00:00:00.001 Started by upstream project "autotest-nightly" build number 4275 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3638 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.113 Fetching changes from the remote Git repository 00:00:00.121 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.158 Using shallow fetch with depth 1 00:00:00.158 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.158 > git --version # timeout=10 00:00:00.202 > git --version # 'git version 2.39.2' 00:00:00.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.990 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.003 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.016 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.016 > git config core.sparsecheckout # timeout=10 00:00:05.027 > git read-tree -mu HEAD # timeout=10 00:00:05.042 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.060 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.060 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.172 [Pipeline] Start of Pipeline 00:00:05.187 [Pipeline] library 00:00:05.189 Loading library shm_lib@master 00:00:09.880 Library shm_lib@master is cached. Copying from home. 00:00:09.964 [Pipeline] node 00:00:10.133 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:10.136 [Pipeline] { 00:00:10.154 [Pipeline] catchError 00:00:10.157 [Pipeline] { 00:00:10.172 [Pipeline] wrap 00:00:10.184 [Pipeline] { 00:00:10.197 [Pipeline] stage 00:00:10.199 [Pipeline] { (Prologue) 00:00:10.218 [Pipeline] echo 00:00:10.220 Node: VM-host-WFP1 00:00:10.228 [Pipeline] cleanWs 00:00:10.237 [WS-CLEANUP] Deleting project workspace... 00:00:10.237 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.242 [WS-CLEANUP] done 00:00:10.494 [Pipeline] setCustomBuildProperty 00:00:10.572 [Pipeline] httpRequest 00:00:13.597 [Pipeline] echo 00:00:13.598 Sorcerer 10.211.164.20 is dead 00:00:13.607 [Pipeline] httpRequest 00:00:15.428 [Pipeline] echo 00:00:15.430 Sorcerer 10.211.164.101 is alive 00:00:15.442 [Pipeline] retry 00:00:15.444 [Pipeline] { 00:00:15.458 [Pipeline] httpRequest 00:00:15.463 HttpMethod: GET 00:00:15.464 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.465 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.468 Response Code: HTTP/1.1 200 OK 00:00:15.469 Success: Status code 200 is in the accepted range: 200,404 00:00:15.470 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.896 [Pipeline] } 00:00:15.912 [Pipeline] // retry 00:00:15.920 [Pipeline] sh 00:00:16.210 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.229 [Pipeline] httpRequest 00:00:16.610 [Pipeline] echo 00:00:16.612 Sorcerer 10.211.164.101 is alive 00:00:16.623 [Pipeline] retry 00:00:16.625 [Pipeline] { 00:00:16.641 [Pipeline] httpRequest 00:00:16.646 HttpMethod: GET 00:00:16.647 URL: http://10.211.164.101/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:16.648 Sending request to url: http://10.211.164.101/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:16.651 Response Code: HTTP/1.1 200 OK 00:00:16.651 Success: Status code 200 is in the accepted range: 200,404 00:00:16.652 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:33.934 [Pipeline] } 00:00:33.951 [Pipeline] // retry 00:00:33.959 [Pipeline] sh 00:00:34.248 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:37.555 [Pipeline] sh 00:00:37.840 + git -C spdk log --oneline -n5 00:00:37.840 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:37.840 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:37.840 4bcab9fb9 correct kick for CQ full case 00:00:37.840 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:37.840 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:37.859 [Pipeline] writeFile 00:00:37.874 [Pipeline] sh 00:00:38.161 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:38.174 [Pipeline] sh 00:00:38.459 + cat autorun-spdk.conf 00:00:38.459 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.459 SPDK_TEST_NVMF=1 00:00:38.459 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.459 SPDK_TEST_URING=1 00:00:38.459 SPDK_TEST_VFIOUSER=1 00:00:38.459 SPDK_TEST_USDT=1 00:00:38.459 SPDK_RUN_ASAN=1 00:00:38.459 SPDK_RUN_UBSAN=1 00:00:38.459 NET_TYPE=virt 00:00:38.459 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.467 RUN_NIGHTLY=1 00:00:38.469 [Pipeline] } 00:00:38.483 [Pipeline] // stage 00:00:38.498 [Pipeline] stage 00:00:38.500 [Pipeline] { (Run VM) 00:00:38.513 [Pipeline] sh 00:00:38.800 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:38.800 + echo 'Start stage prepare_nvme.sh' 00:00:38.800 Start stage prepare_nvme.sh 00:00:38.800 + [[ -n 1 ]] 00:00:38.800 + disk_prefix=ex1 00:00:38.800 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:38.800 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 
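Note: autorun-spdk.conf (shown by the cat above) is a plain key=value shell fragment; the SPDK test scripts simply source it and branch on the flags, as the xtrace output further down in this log shows. A minimal illustration of that pattern follows; this is a hypothetical consumer snippet, not the actual autorun.sh code.
# sketch only: source the job configuration and branch on its flags
source ./autorun-spdk.conf
if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
    echo "NVMe-oF TCP tests requested (URING=$SPDK_TEST_URING, ASAN=$SPDK_RUN_ASAN)"
fi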
00:00:38.800 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:38.800 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.800 ++ SPDK_TEST_NVMF=1 00:00:38.800 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.800 ++ SPDK_TEST_URING=1 00:00:38.800 ++ SPDK_TEST_VFIOUSER=1 00:00:38.800 ++ SPDK_TEST_USDT=1 00:00:38.800 ++ SPDK_RUN_ASAN=1 00:00:38.800 ++ SPDK_RUN_UBSAN=1 00:00:38.800 ++ NET_TYPE=virt 00:00:38.800 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.800 ++ RUN_NIGHTLY=1 00:00:38.800 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:38.800 + nvme_files=() 00:00:38.800 + declare -A nvme_files 00:00:38.800 + backend_dir=/var/lib/libvirt/images/backends 00:00:38.800 + nvme_files['nvme.img']=5G 00:00:38.800 + nvme_files['nvme-cmb.img']=5G 00:00:38.800 + nvme_files['nvme-multi0.img']=4G 00:00:38.800 + nvme_files['nvme-multi1.img']=4G 00:00:38.800 + nvme_files['nvme-multi2.img']=4G 00:00:38.800 + nvme_files['nvme-openstack.img']=8G 00:00:38.800 + nvme_files['nvme-zns.img']=5G 00:00:38.800 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:38.800 + (( SPDK_TEST_FTL == 1 )) 00:00:38.800 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:38.800 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:38.800 + for nvme in "${!nvme_files[@]}" 00:00:38.800 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:38.800 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.800 + for nvme in "${!nvme_files[@]}" 00:00:38.800 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:38.800 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.800 + for nvme in "${!nvme_files[@]}" 00:00:38.800 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:38.800 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:38.800 + for nvme in "${!nvme_files[@]}" 00:00:38.800 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:38.800 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.800 + for nvme in "${!nvme_files[@]}" 00:00:38.800 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:39.061 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.061 + for nvme in "${!nvme_files[@]}" 00:00:39.061 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:39.061 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.061 + for nvme in "${!nvme_files[@]}" 00:00:39.061 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:39.061 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.061 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:39.061 + echo 'End stage prepare_nvme.sh' 00:00:39.061 End stage prepare_nvme.sh 00:00:39.074 [Pipeline] sh 00:00:39.361 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 
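Note: the "Formatting '...', fmt=raw size=... preallocation=falloc" lines above match qemu-img create output; create_nvme_img.sh is SPDK's wrapper around that image creation. A rough stand-alone equivalent for one of the backing files, assuming qemu-img is installed and the backends directory exists, would be:
# sketch only: create one raw, falloc-preallocated NVMe backing image
sudo qemu-img create -f raw -o preallocation=falloc \
    /var/lib/libvirt/images/backends/ex1-nvme.img 5G
falloc preallocation reserves the blocks without writing them, which keeps creation of the 4G-8G backends fast.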
00:00:39.361 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:00:39.361
00:00:39.361 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:00:39.361 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:00:39.361 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:39.361 HELP=0
00:00:39.361 DRY_RUN=0
00:00:39.361 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:00:39.361 NVME_DISKS_TYPE=nvme,nvme,
00:00:39.361 NVME_AUTO_CREATE=0
00:00:39.361 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:00:39.361 NVME_CMB=,,
00:00:39.361 NVME_PMR=,,
00:00:39.361 NVME_ZNS=,,
00:00:39.361 NVME_MS=,,
00:00:39.361 NVME_FDP=,,
00:00:39.361 SPDK_VAGRANT_DISTRO=fedora39
00:00:39.361 SPDK_VAGRANT_VMCPU=10
00:00:39.361 SPDK_VAGRANT_VMRAM=12288
00:00:39.361 SPDK_VAGRANT_PROVIDER=libvirt
00:00:39.361 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:39.361 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:39.361 SPDK_OPENSTACK_NETWORK=0
00:00:39.361 VAGRANT_PACKAGE_BOX=0
00:00:39.361 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:39.361 FORCE_DISTRO=true
00:00:39.361 VAGRANT_BOX_VERSION=
00:00:39.361 EXTRA_VAGRANTFILES=
00:00:39.361 NIC_MODEL=e1000
00:00:39.361
00:00:39.361 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:00:39.361 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:42.655 Bringing machine 'default' up with 'libvirt' provider...
00:00:43.596 ==> default: Creating image (snapshot of base box volume).
00:00:43.596 ==> default: Creating domain with the following settings...
00:00:43.596 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731830447_403995cad79201a37ca6
00:00:43.596 ==> default: -- Domain type: kvm
00:00:43.596 ==> default: -- Cpus: 10
00:00:43.596 ==> default: -- Feature: acpi
00:00:43.596 ==> default: -- Feature: apic
00:00:43.596 ==> default: -- Feature: pae
00:00:43.596 ==> default: -- Memory: 12288M
00:00:43.596 ==> default: -- Memory Backing: hugepages:
00:00:43.596 ==> default: -- Management MAC:
00:00:43.596 ==> default: -- Loader:
00:00:43.596 ==> default: -- Nvram:
00:00:43.596 ==> default: -- Base box: spdk/fedora39
00:00:43.596 ==> default: -- Storage pool: default
00:00:43.596 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731830447_403995cad79201a37ca6.img (20G)
00:00:43.596 ==> default: -- Volume Cache: default
00:00:43.596 ==> default: -- Kernel:
00:00:43.596 ==> default: -- Initrd:
00:00:43.596 ==> default: -- Graphics Type: vnc
00:00:43.596 ==> default: -- Graphics Port: -1
00:00:43.596 ==> default: -- Graphics IP: 127.0.0.1
00:00:43.596 ==> default: -- Graphics Password: Not defined
00:00:43.596 ==> default: -- Video Type: cirrus
00:00:43.596 ==> default: -- Video VRAM: 9216
00:00:43.596 ==> default: -- Sound Type:
00:00:43.596 ==> default: -- Keymap: en-us
00:00:43.596 ==> default: -- TPM Path:
00:00:43.596 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:43.596 ==> default: -- Command line args:
00:00:43.596 ==> default: -> value=-device,
00:00:43.596 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:43.596 ==> default: -> value=-drive,
00:00:43.596 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:00:43.596 ==> default: -> value=-device,
00:00:43.596 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.596 ==> default: -> value=-device,
00:00:43.596 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:43.596 ==> default: -> value=-drive,
00:00:43.596 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:43.596 ==> default: -> value=-device,
00:00:43.596 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.596 ==> default: -> value=-drive,
00:00:43.596 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:43.596 ==> default: -> value=-device,
00:00:43.596 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.596 ==> default: -> value=-drive,
00:00:43.596 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:43.596 ==> default: -> value=-device,
00:00:43.596 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.856 ==> default: Creating shared folders metadata...
00:00:44.116 ==> default: Starting domain.
00:00:46.024 ==> default: Waiting for domain to get an IP address...
00:01:04.138 ==> default: Waiting for SSH to become available...
00:01:05.517 ==> default: Configuring and enabling network interfaces...
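Note: the "Command line args" section above is how the Vagrantfile passes extra QEMU arguments through libvirt; each emulated NVMe controller is a -device nvme, with its namespaces attached as -device nvme-ns entries backed by the raw images created by prepare_nvme.sh earlier. Trimmed to the first controller, a roughly equivalent set of flags on a plain qemu-system-x86_64 invocation (other VM options omitted) would be:
# sketch only: one NVMe controller with a single namespace, arguments taken from the log above
qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096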
00:01:10.794 default: SSH address: 192.168.121.59:22
00:01:10.794 default: SSH username: vagrant
00:01:10.794 default: SSH auth method: private key
00:01:14.085 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:24.171 ==> default: Mounting SSHFS shared folder...
00:01:25.553 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:25.553 ==> default: Checking Mount..
00:01:27.461 ==> default: Folder Successfully Mounted!
00:01:27.461 ==> default: Running provisioner: file...
00:01:28.401 default: ~/.gitconfig => .gitconfig
00:01:28.970
00:01:28.970 SUCCESS!
00:01:28.970
00:01:28.970 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:28.970 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:28.970 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:28.970
00:01:28.980 [Pipeline] }
00:01:28.995 [Pipeline] // stage
00:01:29.007 [Pipeline] dir
00:01:29.007 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:01:29.009 [Pipeline] {
00:01:29.023 [Pipeline] catchError
00:01:29.025 [Pipeline] {
00:01:29.040 [Pipeline] sh
00:01:29.327 + vagrant ssh-config --host vagrant
00:01:29.327 + sed -ne /^Host/,$p
00:01:29.327 + tee ssh_conf
00:01:31.864 Host vagrant
00:01:31.864 HostName 192.168.121.59
00:01:31.864 User vagrant
00:01:31.864 Port 22
00:01:31.864 UserKnownHostsFile /dev/null
00:01:31.864 StrictHostKeyChecking no
00:01:31.864 PasswordAuthentication no
00:01:31.864 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:31.864 IdentitiesOnly yes
00:01:31.864 LogLevel FATAL
00:01:31.864 ForwardAgent yes
00:01:31.864 ForwardX11 yes
00:01:31.864
00:01:31.880 [Pipeline] withEnv
00:01:31.883 [Pipeline] {
00:01:31.897 [Pipeline] sh
00:01:32.179 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:32.179 source /etc/os-release
00:01:32.179 [[ -e /image.version ]] && img=$(< /image.version)
00:01:32.179 # Minimal, systemd-like check.
00:01:32.179 if [[ -e /.dockerenv ]]; then
00:01:32.179 # Clear garbage from the node's name:
00:01:32.179 # agt-er_autotest_547-896 -> autotest_547-896
00:01:32.179 # $HOSTNAME is the actual container id
00:01:32.179 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:32.179 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:32.179 # We can assume this is a mount from a host where container is running,
00:01:32.179 # so fetch its hostname to easily identify the target swarm worker.
00:01:32.179 container="$(< /etc/hostname) ($agent)" 00:01:32.179 else 00:01:32.179 # Fallback 00:01:32.179 container=$agent 00:01:32.179 fi 00:01:32.179 fi 00:01:32.179 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:32.179 00:01:32.451 [Pipeline] } 00:01:32.466 [Pipeline] // withEnv 00:01:32.474 [Pipeline] setCustomBuildProperty 00:01:32.489 [Pipeline] stage 00:01:32.492 [Pipeline] { (Tests) 00:01:32.508 [Pipeline] sh 00:01:32.791 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:33.064 [Pipeline] sh 00:01:33.345 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:33.620 [Pipeline] timeout 00:01:33.621 Timeout set to expire in 1 hr 0 min 00:01:33.623 [Pipeline] { 00:01:33.638 [Pipeline] sh 00:01:33.921 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:34.490 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:34.501 [Pipeline] sh 00:01:34.783 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:35.057 [Pipeline] sh 00:01:35.339 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:35.617 [Pipeline] sh 00:01:35.969 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:36.229 ++ readlink -f spdk_repo 00:01:36.229 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:36.229 + [[ -n /home/vagrant/spdk_repo ]] 00:01:36.229 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:36.229 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:36.229 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:36.229 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:36.229 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:36.229 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:36.229 + cd /home/vagrant/spdk_repo 00:01:36.229 + source /etc/os-release 00:01:36.229 ++ NAME='Fedora Linux' 00:01:36.229 ++ VERSION='39 (Cloud Edition)' 00:01:36.229 ++ ID=fedora 00:01:36.229 ++ VERSION_ID=39 00:01:36.229 ++ VERSION_CODENAME= 00:01:36.229 ++ PLATFORM_ID=platform:f39 00:01:36.229 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:36.229 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:36.229 ++ LOGO=fedora-logo-icon 00:01:36.229 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:36.229 ++ HOME_URL=https://fedoraproject.org/ 00:01:36.229 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:36.229 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:36.229 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:36.229 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:36.229 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:36.229 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:36.229 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:36.229 ++ SUPPORT_END=2024-11-12 00:01:36.229 ++ VARIANT='Cloud Edition' 00:01:36.229 ++ VARIANT_ID=cloud 00:01:36.229 + uname -a 00:01:36.229 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:36.229 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:36.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:36.798 Hugepages 00:01:36.798 node hugesize free / total 00:01:36.798 node0 1048576kB 0 / 0 00:01:36.798 node0 2048kB 0 / 0 00:01:36.798 00:01:36.798 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:36.798 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:36.798 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:36.798 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:36.798 + rm -f /tmp/spdk-ld-path 00:01:36.798 + source autorun-spdk.conf 00:01:36.798 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.798 ++ SPDK_TEST_NVMF=1 00:01:36.798 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.798 ++ SPDK_TEST_URING=1 00:01:36.798 ++ SPDK_TEST_VFIOUSER=1 00:01:36.798 ++ SPDK_TEST_USDT=1 00:01:36.798 ++ SPDK_RUN_ASAN=1 00:01:36.798 ++ SPDK_RUN_UBSAN=1 00:01:36.798 ++ NET_TYPE=virt 00:01:36.798 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.798 ++ RUN_NIGHTLY=1 00:01:36.798 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:36.798 + [[ -n '' ]] 00:01:36.798 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:37.058 + for M in /var/spdk/build-*-manifest.txt 00:01:37.058 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:37.058 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.058 + for M in /var/spdk/build-*-manifest.txt 00:01:37.058 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:37.058 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.058 + for M in /var/spdk/build-*-manifest.txt 00:01:37.058 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:37.058 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:37.058 ++ uname 00:01:37.058 + [[ Linux == \L\i\n\u\x ]] 00:01:37.058 + sudo dmesg -T 00:01:37.058 + sudo dmesg --clear 00:01:37.058 + dmesg_pid=5223 00:01:37.058 + sudo dmesg -Tw 00:01:37.058 + [[ Fedora Linux == FreeBSD ]] 00:01:37.058 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.058 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.058 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:37.058 + [[ -x /usr/src/fio-static/fio ]] 00:01:37.058 + export FIO_BIN=/usr/src/fio-static/fio 00:01:37.058 + FIO_BIN=/usr/src/fio-static/fio 00:01:37.058 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:37.058 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:37.058 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.058 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.058 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.058 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.058 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.058 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.058 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:37.058 08:01:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:37.058 08:01:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.058 08:01:42 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:37.058 08:01:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:37.058 08:01:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:37.318 08:01:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:37.318 08:01:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:37.318 08:01:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:37.318 08:01:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.318 08:01:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.318 08:01:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.318 08:01:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.318 08:01:42 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.318 08:01:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.318 08:01:42 -- paths/export.sh@5 -- $ export PATH 00:01:37.318 08:01:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.318 08:01:42 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:37.318 08:01:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:37.318 08:01:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731830502.XXXXXX 00:01:37.318 08:01:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731830502.cw9d7L 00:01:37.318 08:01:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:37.318 08:01:42 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:37.318 08:01:42 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:37.318 08:01:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:37.318 08:01:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.318 08:01:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:37.318 08:01:42 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:37.318 08:01:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.318 08:01:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:37.318 08:01:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:37.318 08:01:42 -- pm/common@17 -- $ local monitor 00:01:37.318 08:01:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.318 08:01:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.318 08:01:42 -- pm/common@25 -- $ sleep 1 00:01:37.318 08:01:42 -- pm/common@21 -- $ date +%s 00:01:37.318 08:01:42 -- pm/common@21 -- $ date +%s 00:01:37.318 08:01:42 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731830502 00:01:37.318 08:01:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731830502 00:01:37.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731830502_collect-cpu-load.pm.log 00:01:37.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731830502_collect-vmstat.pm.log 00:01:38.256 08:01:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:38.256 08:01:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.256 08:01:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.256 08:01:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:38.256 08:01:43 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.256 Sun Nov 17 08:01:43 AM UTC 2024 00:01:38.256 08:01:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.256 v25.01-pre-189-g83e8405e4 00:01:38.256 08:01:43 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:38.256 08:01:43 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:38.256 08:01:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:38.256 08:01:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:38.256 08:01:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.256 ************************************ 00:01:38.256 START TEST asan 00:01:38.256 ************************************ 00:01:38.256 using asan 00:01:38.256 08:01:43 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:38.256 00:01:38.256 real 0m0.000s 00:01:38.256 user 0m0.000s 00:01:38.256 sys 0m0.000s 00:01:38.256 08:01:43 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:38.256 ************************************ 00:01:38.256 END TEST asan 00:01:38.256 08:01:43 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:38.256 ************************************ 00:01:38.516 08:01:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.516 08:01:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.516 08:01:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:38.516 08:01:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:38.516 08:01:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.516 ************************************ 00:01:38.516 START TEST ubsan 00:01:38.516 ************************************ 00:01:38.516 using ubsan 00:01:38.516 08:01:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:38.516 00:01:38.516 real 0m0.000s 00:01:38.516 user 0m0.000s 00:01:38.516 sys 0m0.000s 00:01:38.516 08:01:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:38.516 ************************************ 00:01:38.516 END TEST ubsan 00:01:38.516 08:01:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:38.516 ************************************ 00:01:38.516 08:01:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:38.516 08:01:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:38.516 08:01:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:38.516 08:01:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:38.516 08:01:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:38.516 08:01:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:38.516 08:01:43 -- spdk/autobuild.sh@59 -- 
$ [[ 0 -eq 1 ]] 00:01:38.516 08:01:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:38.516 08:01:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:38.776 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:38.776 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:39.345 Using 'verbs' RDMA provider 00:01:58.822 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:13.715 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:13.715 Creating mk/config.mk...done. 00:02:13.715 Creating mk/cc.flags.mk...done. 00:02:13.715 Type 'make' to build. 00:02:13.715 08:02:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:13.715 08:02:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:13.715 08:02:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:13.715 08:02:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.715 ************************************ 00:02:13.715 START TEST make 00:02:13.715 ************************************ 00:02:13.715 08:02:17 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:13.715 make[1]: Nothing to be done for 'all'. 00:02:14.408 The Meson build system 00:02:14.408 Version: 1.5.0 00:02:14.408 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:14.408 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:14.408 Build type: native build 00:02:14.408 Project name: libvfio-user 00:02:14.408 Project version: 0.0.1 00:02:14.408 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.408 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.408 Host machine cpu family: x86_64 00:02:14.408 Host machine cpu: x86_64 00:02:14.408 Run-time dependency threads found: YES 00:02:14.408 Library dl found: YES 00:02:14.408 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.408 Run-time dependency json-c found: YES 0.17 00:02:14.408 Run-time dependency cmocka found: YES 1.1.7 00:02:14.408 Program pytest-3 found: NO 00:02:14.408 Program flake8 found: NO 00:02:14.408 Program misspell-fixer found: NO 00:02:14.408 Program restructuredtext-lint found: NO 00:02:14.408 Program valgrind found: YES (/usr/bin/valgrind) 00:02:14.408 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.408 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.408 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.408 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:14.408 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:14.408 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:14.408 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:14.408 Build targets in project: 8 00:02:14.408 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:14.408 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:14.408 00:02:14.408 libvfio-user 0.0.1 00:02:14.408 00:02:14.408 User defined options 00:02:14.408 buildtype : debug 00:02:14.408 default_library: shared 00:02:14.408 libdir : /usr/local/lib 00:02:14.409 00:02:14.409 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.011 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:15.011 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:15.011 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:15.011 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:15.011 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:15.011 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:15.011 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:15.011 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:15.011 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:15.011 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:15.011 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:15.011 [11/37] Compiling C object samples/null.p/null.c.o 00:02:15.270 [12/37] Compiling C object samples/client.p/client.c.o 00:02:15.271 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:15.271 [14/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:15.271 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:15.271 [16/37] Linking target samples/client 00:02:15.271 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:15.271 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:15.271 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:15.271 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:15.271 [21/37] Compiling C object samples/server.p/server.c.o 00:02:15.271 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:15.271 [23/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:15.271 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:15.271 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:15.271 [26/37] Linking target lib/libvfio-user.so.0.0.1 00:02:15.271 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:15.271 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:15.271 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:15.271 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:15.271 [31/37] Linking target test/unit_tests 00:02:15.530 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:15.530 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:15.530 [34/37] Linking target samples/server 00:02:15.530 [35/37] Linking target samples/lspci 00:02:15.530 [36/37] Linking target samples/null 00:02:15.530 [37/37] Linking target samples/gpio-pci-idio-16 00:02:15.530 INFO: autodetecting backend as ninja 00:02:15.530 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:15.530 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:15.789 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:15.789 ninja: no work to do. 00:02:23.916 The Meson build system 00:02:23.916 Version: 1.5.0 00:02:23.916 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:23.916 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:23.916 Build type: native build 00:02:23.916 Program cat found: YES (/usr/bin/cat) 00:02:23.916 Project name: DPDK 00:02:23.916 Project version: 24.03.0 00:02:23.916 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.916 C linker for the host machine: cc ld.bfd 2.40-14 00:02:23.916 Host machine cpu family: x86_64 00:02:23.916 Host machine cpu: x86_64 00:02:23.916 Message: ## Building in Developer Mode ## 00:02:23.916 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.916 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:23.916 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.916 Program python3 found: YES (/usr/bin/python3) 00:02:23.916 Program cat found: YES (/usr/bin/cat) 00:02:23.916 Compiler for C supports arguments -march=native: YES 00:02:23.916 Checking for size of "void *" : 8 00:02:23.916 Checking for size of "void *" : 8 (cached) 00:02:23.916 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:23.916 Library m found: YES 00:02:23.916 Library numa found: YES 00:02:23.916 Has header "numaif.h" : YES 00:02:23.916 Library fdt found: NO 00:02:23.916 Library execinfo found: NO 00:02:23.916 Has header "execinfo.h" : YES 00:02:23.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.916 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.916 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.916 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.916 Run-time dependency openssl found: YES 3.1.1 00:02:23.916 Run-time dependency libpcap found: YES 1.10.4 00:02:23.916 Has header "pcap.h" with dependency libpcap: YES 00:02:23.916 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.916 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.916 Compiler for C supports arguments -Wformat: YES 00:02:23.916 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.916 Compiler for C supports arguments -Wformat-security: NO 00:02:23.916 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.916 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.916 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.916 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.916 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.916 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.916 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.916 Compiler for C supports arguments -Wundef: YES 00:02:23.916 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.916 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.916 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.916 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.916 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:02:23.916 Program objdump found: YES (/usr/bin/objdump) 00:02:23.916 Compiler for C supports arguments -mavx512f: YES 00:02:23.916 Checking if "AVX512 checking" compiles: YES 00:02:23.916 Fetching value of define "__SSE4_2__" : 1 00:02:23.916 Fetching value of define "__AES__" : 1 00:02:23.916 Fetching value of define "__AVX__" : 1 00:02:23.916 Fetching value of define "__AVX2__" : 1 00:02:23.916 Fetching value of define "__AVX512BW__" : 1 00:02:23.916 Fetching value of define "__AVX512CD__" : 1 00:02:23.916 Fetching value of define "__AVX512DQ__" : 1 00:02:23.916 Fetching value of define "__AVX512F__" : 1 00:02:23.916 Fetching value of define "__AVX512VL__" : 1 00:02:23.916 Fetching value of define "__PCLMUL__" : 1 00:02:23.916 Fetching value of define "__RDRND__" : 1 00:02:23.916 Fetching value of define "__RDSEED__" : 1 00:02:23.916 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.916 Fetching value of define "__znver1__" : (undefined) 00:02:23.916 Fetching value of define "__znver2__" : (undefined) 00:02:23.916 Fetching value of define "__znver3__" : (undefined) 00:02:23.916 Fetching value of define "__znver4__" : (undefined) 00:02:23.916 Library asan found: YES 00:02:23.916 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.916 Message: lib/log: Defining dependency "log" 00:02:23.916 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.916 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.916 Library rt found: YES 00:02:23.916 Checking for function "getentropy" : NO 00:02:23.916 Message: lib/eal: Defining dependency "eal" 00:02:23.916 Message: lib/ring: Defining dependency "ring" 00:02:23.916 Message: lib/rcu: Defining dependency "rcu" 00:02:23.916 Message: lib/mempool: Defining dependency "mempool" 00:02:23.916 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.916 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.916 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.916 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.916 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.916 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.916 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:23.916 Compiler for C supports arguments -mpclmul: YES 00:02:23.916 Compiler for C supports arguments -maes: YES 00:02:23.916 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.916 Compiler for C supports arguments -mavx512bw: YES 00:02:23.916 Compiler for C supports arguments -mavx512dq: YES 00:02:23.916 Compiler for C supports arguments -mavx512vl: YES 00:02:23.916 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.917 Compiler for C supports arguments -mavx2: YES 00:02:23.917 Compiler for C supports arguments -mavx: YES 00:02:23.917 Message: lib/net: Defining dependency "net" 00:02:23.917 Message: lib/meter: Defining dependency "meter" 00:02:23.917 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.917 Message: lib/pci: Defining dependency "pci" 00:02:23.917 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.917 Message: lib/hash: Defining dependency "hash" 00:02:23.917 Message: lib/timer: Defining dependency "timer" 00:02:23.917 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.917 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.917 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.917 Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:23.917 Message: lib/power: Defining dependency "power" 00:02:23.917 Message: lib/reorder: Defining dependency "reorder" 00:02:23.917 Message: lib/security: Defining dependency "security" 00:02:23.917 Has header "linux/userfaultfd.h" : YES 00:02:23.917 Has header "linux/vduse.h" : YES 00:02:23.917 Message: lib/vhost: Defining dependency "vhost" 00:02:23.917 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.917 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.917 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.917 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.917 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:23.917 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:23.917 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:23.917 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:23.917 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:23.917 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:23.917 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.917 Configuring doxy-api-html.conf using configuration 00:02:23.917 Configuring doxy-api-man.conf using configuration 00:02:23.917 Program mandb found: YES (/usr/bin/mandb) 00:02:23.917 Program sphinx-build found: NO 00:02:23.917 Configuring rte_build_config.h using configuration 00:02:23.917 Message: 00:02:23.917 ================= 00:02:23.917 Applications Enabled 00:02:23.917 ================= 00:02:23.917 00:02:23.917 apps: 00:02:23.917 00:02:23.917 00:02:23.917 Message: 00:02:23.917 ================= 00:02:23.917 Libraries Enabled 00:02:23.917 ================= 00:02:23.917 00:02:23.917 libs: 00:02:23.917 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.917 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:23.917 cryptodev, dmadev, power, reorder, security, vhost, 00:02:23.917 00:02:23.917 Message: 00:02:23.917 =============== 00:02:23.917 Drivers Enabled 00:02:23.917 =============== 00:02:23.917 00:02:23.917 common: 00:02:23.917 00:02:23.917 bus: 00:02:23.917 pci, vdev, 00:02:23.917 mempool: 00:02:23.917 ring, 00:02:23.917 dma: 00:02:23.917 00:02:23.917 net: 00:02:23.917 00:02:23.917 crypto: 00:02:23.917 00:02:23.917 compress: 00:02:23.917 00:02:23.917 vdpa: 00:02:23.917 00:02:23.917 00:02:23.917 Message: 00:02:23.917 ================= 00:02:23.917 Content Skipped 00:02:23.917 ================= 00:02:23.917 00:02:23.917 apps: 00:02:23.917 dumpcap: explicitly disabled via build config 00:02:23.917 graph: explicitly disabled via build config 00:02:23.917 pdump: explicitly disabled via build config 00:02:23.917 proc-info: explicitly disabled via build config 00:02:23.917 test-acl: explicitly disabled via build config 00:02:23.917 test-bbdev: explicitly disabled via build config 00:02:23.917 test-cmdline: explicitly disabled via build config 00:02:23.917 test-compress-perf: explicitly disabled via build config 00:02:23.917 test-crypto-perf: explicitly disabled via build config 00:02:23.917 test-dma-perf: explicitly disabled via build config 00:02:23.917 test-eventdev: explicitly disabled via build config 00:02:23.917 test-fib: explicitly disabled via build config 00:02:23.917 test-flow-perf: explicitly disabled via build config 00:02:23.917 test-gpudev: explicitly disabled via build config 00:02:23.917 test-mldev: explicitly disabled via 
build config 00:02:23.917 test-pipeline: explicitly disabled via build config 00:02:23.917 test-pmd: explicitly disabled via build config 00:02:23.917 test-regex: explicitly disabled via build config 00:02:23.917 test-sad: explicitly disabled via build config 00:02:23.917 test-security-perf: explicitly disabled via build config 00:02:23.917 00:02:23.917 libs: 00:02:23.917 argparse: explicitly disabled via build config 00:02:23.917 metrics: explicitly disabled via build config 00:02:23.917 acl: explicitly disabled via build config 00:02:23.917 bbdev: explicitly disabled via build config 00:02:23.917 bitratestats: explicitly disabled via build config 00:02:23.917 bpf: explicitly disabled via build config 00:02:23.917 cfgfile: explicitly disabled via build config 00:02:23.917 distributor: explicitly disabled via build config 00:02:23.917 efd: explicitly disabled via build config 00:02:23.917 eventdev: explicitly disabled via build config 00:02:23.917 dispatcher: explicitly disabled via build config 00:02:23.917 gpudev: explicitly disabled via build config 00:02:23.917 gro: explicitly disabled via build config 00:02:23.917 gso: explicitly disabled via build config 00:02:23.917 ip_frag: explicitly disabled via build config 00:02:23.917 jobstats: explicitly disabled via build config 00:02:23.917 latencystats: explicitly disabled via build config 00:02:23.917 lpm: explicitly disabled via build config 00:02:23.917 member: explicitly disabled via build config 00:02:23.917 pcapng: explicitly disabled via build config 00:02:23.917 rawdev: explicitly disabled via build config 00:02:23.917 regexdev: explicitly disabled via build config 00:02:23.917 mldev: explicitly disabled via build config 00:02:23.917 rib: explicitly disabled via build config 00:02:23.917 sched: explicitly disabled via build config 00:02:23.917 stack: explicitly disabled via build config 00:02:23.917 ipsec: explicitly disabled via build config 00:02:23.917 pdcp: explicitly disabled via build config 00:02:23.917 fib: explicitly disabled via build config 00:02:23.917 port: explicitly disabled via build config 00:02:23.917 pdump: explicitly disabled via build config 00:02:23.917 table: explicitly disabled via build config 00:02:23.917 pipeline: explicitly disabled via build config 00:02:23.917 graph: explicitly disabled via build config 00:02:23.917 node: explicitly disabled via build config 00:02:23.917 00:02:23.917 drivers: 00:02:23.917 common/cpt: not in enabled drivers build config 00:02:23.917 common/dpaax: not in enabled drivers build config 00:02:23.917 common/iavf: not in enabled drivers build config 00:02:23.917 common/idpf: not in enabled drivers build config 00:02:23.917 common/ionic: not in enabled drivers build config 00:02:23.917 common/mvep: not in enabled drivers build config 00:02:23.917 common/octeontx: not in enabled drivers build config 00:02:23.917 bus/auxiliary: not in enabled drivers build config 00:02:23.917 bus/cdx: not in enabled drivers build config 00:02:23.917 bus/dpaa: not in enabled drivers build config 00:02:23.917 bus/fslmc: not in enabled drivers build config 00:02:23.917 bus/ifpga: not in enabled drivers build config 00:02:23.917 bus/platform: not in enabled drivers build config 00:02:23.917 bus/uacce: not in enabled drivers build config 00:02:23.917 bus/vmbus: not in enabled drivers build config 00:02:23.917 common/cnxk: not in enabled drivers build config 00:02:23.917 common/mlx5: not in enabled drivers build config 00:02:23.917 common/nfp: not in enabled drivers build config 00:02:23.917 
common/nitrox: not in enabled drivers build config 00:02:23.917 common/qat: not in enabled drivers build config 00:02:23.917 common/sfc_efx: not in enabled drivers build config 00:02:23.917 mempool/bucket: not in enabled drivers build config 00:02:23.917 mempool/cnxk: not in enabled drivers build config 00:02:23.917 mempool/dpaa: not in enabled drivers build config 00:02:23.917 mempool/dpaa2: not in enabled drivers build config 00:02:23.917 mempool/octeontx: not in enabled drivers build config 00:02:23.917 mempool/stack: not in enabled drivers build config 00:02:23.917 dma/cnxk: not in enabled drivers build config 00:02:23.917 dma/dpaa: not in enabled drivers build config 00:02:23.917 dma/dpaa2: not in enabled drivers build config 00:02:23.917 dma/hisilicon: not in enabled drivers build config 00:02:23.917 dma/idxd: not in enabled drivers build config 00:02:23.917 dma/ioat: not in enabled drivers build config 00:02:23.917 dma/skeleton: not in enabled drivers build config 00:02:23.917 net/af_packet: not in enabled drivers build config 00:02:23.917 net/af_xdp: not in enabled drivers build config 00:02:23.917 net/ark: not in enabled drivers build config 00:02:23.917 net/atlantic: not in enabled drivers build config 00:02:23.917 net/avp: not in enabled drivers build config 00:02:23.917 net/axgbe: not in enabled drivers build config 00:02:23.917 net/bnx2x: not in enabled drivers build config 00:02:23.917 net/bnxt: not in enabled drivers build config 00:02:23.917 net/bonding: not in enabled drivers build config 00:02:23.917 net/cnxk: not in enabled drivers build config 00:02:23.917 net/cpfl: not in enabled drivers build config 00:02:23.917 net/cxgbe: not in enabled drivers build config 00:02:23.917 net/dpaa: not in enabled drivers build config 00:02:23.917 net/dpaa2: not in enabled drivers build config 00:02:23.917 net/e1000: not in enabled drivers build config 00:02:23.917 net/ena: not in enabled drivers build config 00:02:23.917 net/enetc: not in enabled drivers build config 00:02:23.917 net/enetfec: not in enabled drivers build config 00:02:23.917 net/enic: not in enabled drivers build config 00:02:23.917 net/failsafe: not in enabled drivers build config 00:02:23.917 net/fm10k: not in enabled drivers build config 00:02:23.917 net/gve: not in enabled drivers build config 00:02:23.917 net/hinic: not in enabled drivers build config 00:02:23.917 net/hns3: not in enabled drivers build config 00:02:23.917 net/i40e: not in enabled drivers build config 00:02:23.917 net/iavf: not in enabled drivers build config 00:02:23.918 net/ice: not in enabled drivers build config 00:02:23.918 net/idpf: not in enabled drivers build config 00:02:23.918 net/igc: not in enabled drivers build config 00:02:23.918 net/ionic: not in enabled drivers build config 00:02:23.918 net/ipn3ke: not in enabled drivers build config 00:02:23.918 net/ixgbe: not in enabled drivers build config 00:02:23.918 net/mana: not in enabled drivers build config 00:02:23.918 net/memif: not in enabled drivers build config 00:02:23.918 net/mlx4: not in enabled drivers build config 00:02:23.918 net/mlx5: not in enabled drivers build config 00:02:23.918 net/mvneta: not in enabled drivers build config 00:02:23.918 net/mvpp2: not in enabled drivers build config 00:02:23.918 net/netvsc: not in enabled drivers build config 00:02:23.918 net/nfb: not in enabled drivers build config 00:02:23.918 net/nfp: not in enabled drivers build config 00:02:23.918 net/ngbe: not in enabled drivers build config 00:02:23.918 net/null: not in enabled drivers build config 
00:02:23.918 net/octeontx: not in enabled drivers build config 00:02:23.918 net/octeon_ep: not in enabled drivers build config 00:02:23.918 net/pcap: not in enabled drivers build config 00:02:23.918 net/pfe: not in enabled drivers build config 00:02:23.918 net/qede: not in enabled drivers build config 00:02:23.918 net/ring: not in enabled drivers build config 00:02:23.918 net/sfc: not in enabled drivers build config 00:02:23.918 net/softnic: not in enabled drivers build config 00:02:23.918 net/tap: not in enabled drivers build config 00:02:23.918 net/thunderx: not in enabled drivers build config 00:02:23.918 net/txgbe: not in enabled drivers build config 00:02:23.918 net/vdev_netvsc: not in enabled drivers build config 00:02:23.918 net/vhost: not in enabled drivers build config 00:02:23.918 net/virtio: not in enabled drivers build config 00:02:23.918 net/vmxnet3: not in enabled drivers build config 00:02:23.918 raw/*: missing internal dependency, "rawdev" 00:02:23.918 crypto/armv8: not in enabled drivers build config 00:02:23.918 crypto/bcmfs: not in enabled drivers build config 00:02:23.918 crypto/caam_jr: not in enabled drivers build config 00:02:23.918 crypto/ccp: not in enabled drivers build config 00:02:23.918 crypto/cnxk: not in enabled drivers build config 00:02:23.918 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.918 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.918 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.918 crypto/mlx5: not in enabled drivers build config 00:02:23.918 crypto/mvsam: not in enabled drivers build config 00:02:23.918 crypto/nitrox: not in enabled drivers build config 00:02:23.918 crypto/null: not in enabled drivers build config 00:02:23.918 crypto/octeontx: not in enabled drivers build config 00:02:23.918 crypto/openssl: not in enabled drivers build config 00:02:23.918 crypto/scheduler: not in enabled drivers build config 00:02:23.918 crypto/uadk: not in enabled drivers build config 00:02:23.918 crypto/virtio: not in enabled drivers build config 00:02:23.918 compress/isal: not in enabled drivers build config 00:02:23.918 compress/mlx5: not in enabled drivers build config 00:02:23.918 compress/nitrox: not in enabled drivers build config 00:02:23.918 compress/octeontx: not in enabled drivers build config 00:02:23.918 compress/zlib: not in enabled drivers build config 00:02:23.918 regex/*: missing internal dependency, "regexdev" 00:02:23.918 ml/*: missing internal dependency, "mldev" 00:02:23.918 vdpa/ifc: not in enabled drivers build config 00:02:23.918 vdpa/mlx5: not in enabled drivers build config 00:02:23.918 vdpa/nfp: not in enabled drivers build config 00:02:23.918 vdpa/sfc: not in enabled drivers build config 00:02:23.918 event/*: missing internal dependency, "eventdev" 00:02:23.918 baseband/*: missing internal dependency, "bbdev" 00:02:23.918 gpu/*: missing internal dependency, "gpudev" 00:02:23.918 00:02:23.918 00:02:23.918 Build targets in project: 85 00:02:23.918 00:02:23.918 DPDK 24.03.0 00:02:23.918 00:02:23.918 User defined options 00:02:23.918 buildtype : debug 00:02:23.918 default_library : shared 00:02:23.918 libdir : lib 00:02:23.918 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:23.918 b_sanitize : address 00:02:23.918 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:23.918 c_link_args : 00:02:23.918 cpu_instruction_set: native 00:02:23.918 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:23.918 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:23.918 enable_docs : false 00:02:23.918 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:23.918 enable_kmods : false 00:02:23.918 max_lcores : 128 00:02:23.918 tests : false 00:02:23.918 00:02:23.918 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.178 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:24.436 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.436 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.436 [3/268] Linking static target lib/librte_kvargs.a 00:02:24.436 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.436 [5/268] Linking static target lib/librte_log.a 00:02:24.436 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.695 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.695 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.695 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.695 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.955 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.955 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.955 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.955 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.955 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.955 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.955 [17/268] Linking static target lib/librte_telemetry.a 00:02:24.955 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.214 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.474 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.474 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.474 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.474 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.474 [24/268] Linking target lib/librte_log.so.24.1 00:02:25.474 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.474 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.474 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.733 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.733 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.733 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:25.733 [31/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:25.733 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:25.992 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.992 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:25.992 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.992 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:25.992 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.992 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:26.252 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.252 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:26.252 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:26.252 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:26.252 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:26.252 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:26.252 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:26.252 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.511 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:26.511 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:26.511 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.770 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.770 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.770 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.770 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.770 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.770 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.770 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.030 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.030 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.289 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.289 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.289 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.289 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.289 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.289 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.289 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.549 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.549 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.549 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.809 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.809 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:27.809 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.809 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:28.068 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.068 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.068 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.068 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:28.068 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.068 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:28.068 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:28.328 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.328 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.328 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.328 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.328 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.587 [85/268] Linking static target lib/librte_eal.a 00:02:28.588 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.588 [87/268] Linking static target lib/librte_ring.a 00:02:28.588 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.588 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.588 [90/268] Linking static target lib/librte_rcu.a 00:02:28.588 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.588 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.847 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.847 [94/268] Linking static target lib/librte_mempool.a 00:02:28.847 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.847 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.107 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.107 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.107 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.107 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.107 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.366 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.366 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.366 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:29.366 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.366 [106/268] Linking static target lib/librte_mbuf.a 00:02:29.366 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.366 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:29.366 [109/268] Linking static target lib/librte_net.a 00:02:29.626 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:29.626 [111/268] Linking static target lib/librte_meter.a 00:02:29.626 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:29.626 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:29.885 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:29.885 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.885 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.145 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.145 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.145 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.405 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.405 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.405 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.665 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.665 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.665 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.665 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:30.924 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:30.924 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:30.924 [129/268] Linking static target lib/librte_pci.a 00:02:30.924 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:30.924 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:30.924 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:30.924 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.924 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.184 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.184 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.184 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:31.184 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.184 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.184 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.184 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:31.184 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.184 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.184 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:31.184 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:31.445 [146/268] Linking static target lib/librte_cmdline.a 00:02:31.445 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.704 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:31.704 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:31.704 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:31.704 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.963 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:31.963 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:31.963 [154/268] Linking static target lib/librte_timer.a 00:02:32.222 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.222 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.222 [157/268] Linking static target lib/librte_ethdev.a 00:02:32.222 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.222 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.222 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.483 [161/268] Linking static target lib/librte_compressdev.a 00:02:32.483 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.483 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.483 [164/268] Linking static target lib/librte_hash.a 00:02:32.743 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.743 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:32.743 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.743 [168/268] Linking static target lib/librte_dmadev.a 00:02:32.743 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:33.003 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.003 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.003 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.262 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:33.262 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:33.262 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.522 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:33.522 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.522 [178/268] Linking static target lib/librte_cryptodev.a 00:02:33.522 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:33.522 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:33.522 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:33.522 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:33.522 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.781 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.781 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:33.781 [186/268] Linking static target lib/librte_power.a 00:02:34.041 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:34.041 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:34.041 [189/268] Linking static target lib/librte_reorder.a 00:02:34.041 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:34.300 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.300 [192/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:34.300 [193/268] Linking static target lib/librte_security.a 00:02:34.560 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.820 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.080 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.080 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.080 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.080 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.080 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.340 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.599 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.599 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.599 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.600 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.860 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.860 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.860 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.860 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.860 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.119 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.119 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.119 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.119 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.119 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.380 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.380 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:36.380 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.380 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:36.380 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.380 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.380 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.640 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.640 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.640 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:36.640 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.899 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.838 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.131 [229/268] Generating lib/eal.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:41.391 [230/268] Linking target lib/librte_eal.so.24.1 00:02:41.391 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.391 [232/268] Linking target lib/librte_meter.so.24.1 00:02:41.391 [233/268] Linking target lib/librte_ring.so.24.1 00:02:41.391 [234/268] Linking target lib/librte_pci.so.24.1 00:02:41.391 [235/268] Linking target lib/librte_timer.so.24.1 00:02:41.391 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.651 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.651 [238/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.651 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.651 [240/268] Linking static target lib/librte_vhost.a 00:02:41.651 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.651 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.652 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.652 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.652 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.652 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:41.652 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:41.652 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.652 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.912 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.912 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.912 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.912 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.912 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.912 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.912 [256/268] Linking target lib/librte_net.so.24.1 00:02:41.912 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:42.172 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.172 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.172 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.172 [261/268] Linking target lib/librte_security.so.24.1 00:02:42.172 [262/268] Linking target lib/librte_hash.so.24.1 00:02:42.172 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.432 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.432 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.432 [266/268] Linking target lib/librte_power.so.24.1 00:02:44.425 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.425 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:44.426 INFO: autodetecting backend as ninja 00:02:44.426 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.526 CC lib/log/log.o 00:03:02.526 CC lib/log/log_deprecated.o 00:03:02.526 CC lib/log/log_flags.o 00:03:02.526 CC lib/ut/ut.o 00:03:02.526 CC lib/ut_mock/mock.o 00:03:02.526 LIB libspdk_log.a 
00:03:02.526 LIB libspdk_ut.a 00:03:02.526 LIB libspdk_ut_mock.a 00:03:02.526 SO libspdk_ut.so.2.0 00:03:02.526 SO libspdk_log.so.7.1 00:03:02.526 SO libspdk_ut_mock.so.6.0 00:03:02.526 SYMLINK libspdk_ut.so 00:03:02.526 SYMLINK libspdk_log.so 00:03:02.526 SYMLINK libspdk_ut_mock.so 00:03:02.526 CC lib/ioat/ioat.o 00:03:02.526 CXX lib/trace_parser/trace.o 00:03:02.526 CC lib/util/base64.o 00:03:02.526 CC lib/dma/dma.o 00:03:02.786 CC lib/util/cpuset.o 00:03:02.786 CC lib/util/crc32.o 00:03:02.786 CC lib/util/bit_array.o 00:03:02.786 CC lib/util/crc16.o 00:03:02.786 CC lib/util/crc32c.o 00:03:02.786 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.786 CC lib/util/crc32_ieee.o 00:03:02.786 CC lib/util/crc64.o 00:03:02.786 CC lib/util/dif.o 00:03:02.786 CC lib/util/fd.o 00:03:02.786 LIB libspdk_dma.a 00:03:02.786 CC lib/util/fd_group.o 00:03:02.786 SO libspdk_dma.so.5.0 00:03:02.786 CC lib/vfio_user/host/vfio_user.o 00:03:02.786 LIB libspdk_ioat.a 00:03:03.045 CC lib/util/file.o 00:03:03.045 SYMLINK libspdk_dma.so 00:03:03.045 SO libspdk_ioat.so.7.0 00:03:03.045 CC lib/util/hexlify.o 00:03:03.045 CC lib/util/iov.o 00:03:03.045 CC lib/util/math.o 00:03:03.045 SYMLINK libspdk_ioat.so 00:03:03.045 CC lib/util/net.o 00:03:03.045 CC lib/util/pipe.o 00:03:03.045 CC lib/util/strerror_tls.o 00:03:03.045 CC lib/util/string.o 00:03:03.045 LIB libspdk_vfio_user.a 00:03:03.045 CC lib/util/uuid.o 00:03:03.045 SO libspdk_vfio_user.so.5.0 00:03:03.045 CC lib/util/xor.o 00:03:03.045 CC lib/util/zipf.o 00:03:03.305 CC lib/util/md5.o 00:03:03.305 SYMLINK libspdk_vfio_user.so 00:03:03.305 LIB libspdk_util.a 00:03:03.565 SO libspdk_util.so.10.1 00:03:03.824 LIB libspdk_trace_parser.a 00:03:03.824 SYMLINK libspdk_util.so 00:03:03.824 SO libspdk_trace_parser.so.6.0 00:03:03.824 SYMLINK libspdk_trace_parser.so 00:03:03.824 CC lib/env_dpdk/env.o 00:03:03.824 CC lib/rdma_utils/rdma_utils.o 00:03:03.824 CC lib/env_dpdk/pci.o 00:03:03.824 CC lib/env_dpdk/memory.o 00:03:03.824 CC lib/env_dpdk/init.o 00:03:03.824 CC lib/idxd/idxd.o 00:03:03.824 CC lib/vmd/led.o 00:03:03.824 CC lib/vmd/vmd.o 00:03:03.824 CC lib/json/json_parse.o 00:03:03.824 CC lib/conf/conf.o 00:03:04.084 CC lib/json/json_util.o 00:03:04.084 LIB libspdk_conf.a 00:03:04.084 CC lib/env_dpdk/threads.o 00:03:04.084 SO libspdk_conf.so.6.0 00:03:04.345 LIB libspdk_rdma_utils.a 00:03:04.345 SO libspdk_rdma_utils.so.1.0 00:03:04.345 SYMLINK libspdk_conf.so 00:03:04.345 CC lib/env_dpdk/pci_ioat.o 00:03:04.345 SYMLINK libspdk_rdma_utils.so 00:03:04.345 CC lib/env_dpdk/pci_virtio.o 00:03:04.345 CC lib/json/json_write.o 00:03:04.345 CC lib/env_dpdk/pci_vmd.o 00:03:04.345 CC lib/env_dpdk/pci_idxd.o 00:03:04.345 CC lib/idxd/idxd_user.o 00:03:04.345 CC lib/idxd/idxd_kernel.o 00:03:04.605 CC lib/rdma_provider/common.o 00:03:04.605 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:04.605 CC lib/env_dpdk/pci_event.o 00:03:04.605 CC lib/env_dpdk/sigbus_handler.o 00:03:04.605 CC lib/env_dpdk/pci_dpdk.o 00:03:04.605 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:04.605 LIB libspdk_json.a 00:03:04.605 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:04.605 LIB libspdk_idxd.a 00:03:04.605 LIB libspdk_vmd.a 00:03:04.605 SO libspdk_json.so.6.0 00:03:04.605 LIB libspdk_rdma_provider.a 00:03:04.605 SO libspdk_vmd.so.6.0 00:03:04.605 SO libspdk_idxd.so.12.1 00:03:04.882 SO libspdk_rdma_provider.so.7.0 00:03:04.882 SYMLINK libspdk_json.so 00:03:04.882 SYMLINK libspdk_vmd.so 00:03:04.882 SYMLINK libspdk_idxd.so 00:03:04.882 SYMLINK libspdk_rdma_provider.so 00:03:05.170 CC lib/jsonrpc/jsonrpc_server.o 
00:03:05.170 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:05.170 CC lib/jsonrpc/jsonrpc_client.o 00:03:05.170 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:05.430 LIB libspdk_jsonrpc.a 00:03:05.430 SO libspdk_jsonrpc.so.6.0 00:03:05.430 LIB libspdk_env_dpdk.a 00:03:05.430 SYMLINK libspdk_jsonrpc.so 00:03:05.689 SO libspdk_env_dpdk.so.15.1 00:03:05.689 SYMLINK libspdk_env_dpdk.so 00:03:05.948 CC lib/rpc/rpc.o 00:03:06.206 LIB libspdk_rpc.a 00:03:06.206 SO libspdk_rpc.so.6.0 00:03:06.206 SYMLINK libspdk_rpc.so 00:03:06.776 CC lib/keyring/keyring.o 00:03:06.776 CC lib/keyring/keyring_rpc.o 00:03:06.776 CC lib/notify/notify.o 00:03:06.776 CC lib/notify/notify_rpc.o 00:03:06.776 CC lib/trace/trace_rpc.o 00:03:06.776 CC lib/trace/trace.o 00:03:06.776 CC lib/trace/trace_flags.o 00:03:06.776 LIB libspdk_notify.a 00:03:06.776 SO libspdk_notify.so.6.0 00:03:06.776 LIB libspdk_keyring.a 00:03:07.035 SYMLINK libspdk_notify.so 00:03:07.036 SO libspdk_keyring.so.2.0 00:03:07.036 LIB libspdk_trace.a 00:03:07.036 SO libspdk_trace.so.11.0 00:03:07.036 SYMLINK libspdk_keyring.so 00:03:07.036 SYMLINK libspdk_trace.so 00:03:07.604 CC lib/sock/sock.o 00:03:07.604 CC lib/sock/sock_rpc.o 00:03:07.604 CC lib/thread/thread.o 00:03:07.604 CC lib/thread/iobuf.o 00:03:07.863 LIB libspdk_sock.a 00:03:07.863 SO libspdk_sock.so.10.0 00:03:08.122 SYMLINK libspdk_sock.so 00:03:08.382 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.382 CC lib/nvme/nvme_ctrlr.o 00:03:08.382 CC lib/nvme/nvme_fabric.o 00:03:08.382 CC lib/nvme/nvme_ns_cmd.o 00:03:08.382 CC lib/nvme/nvme_ns.o 00:03:08.382 CC lib/nvme/nvme_pcie.o 00:03:08.382 CC lib/nvme/nvme_pcie_common.o 00:03:08.382 CC lib/nvme/nvme.o 00:03:08.382 CC lib/nvme/nvme_qpair.o 00:03:09.320 LIB libspdk_thread.a 00:03:09.320 CC lib/nvme/nvme_quirks.o 00:03:09.320 SO libspdk_thread.so.11.0 00:03:09.320 CC lib/nvme/nvme_transport.o 00:03:09.320 CC lib/nvme/nvme_discovery.o 00:03:09.320 SYMLINK libspdk_thread.so 00:03:09.320 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.320 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.320 CC lib/nvme/nvme_tcp.o 00:03:09.320 CC lib/nvme/nvme_opal.o 00:03:09.579 CC lib/accel/accel.o 00:03:09.579 CC lib/accel/accel_rpc.o 00:03:09.579 CC lib/accel/accel_sw.o 00:03:09.839 CC lib/blob/blobstore.o 00:03:09.839 CC lib/nvme/nvme_io_msg.o 00:03:10.098 CC lib/init/json_config.o 00:03:10.098 CC lib/virtio/virtio.o 00:03:10.098 CC lib/virtio/virtio_vhost_user.o 00:03:10.098 CC lib/vfu_tgt/tgt_endpoint.o 00:03:10.098 CC lib/fsdev/fsdev.o 00:03:10.098 CC lib/init/subsystem.o 00:03:10.357 CC lib/fsdev/fsdev_io.o 00:03:10.357 CC lib/vfu_tgt/tgt_rpc.o 00:03:10.357 CC lib/virtio/virtio_vfio_user.o 00:03:10.357 CC lib/init/subsystem_rpc.o 00:03:10.616 LIB libspdk_vfu_tgt.a 00:03:10.616 CC lib/init/rpc.o 00:03:10.616 CC lib/fsdev/fsdev_rpc.o 00:03:10.616 SO libspdk_vfu_tgt.so.3.0 00:03:10.616 CC lib/virtio/virtio_pci.o 00:03:10.616 SYMLINK libspdk_vfu_tgt.so 00:03:10.616 CC lib/nvme/nvme_poll_group.o 00:03:10.616 LIB libspdk_accel.a 00:03:10.616 LIB libspdk_init.a 00:03:10.616 CC lib/nvme/nvme_zns.o 00:03:10.616 SO libspdk_accel.so.16.0 00:03:10.616 SO libspdk_init.so.6.0 00:03:10.616 CC lib/nvme/nvme_stubs.o 00:03:10.874 SYMLINK libspdk_accel.so 00:03:10.874 SYMLINK libspdk_init.so 00:03:10.874 CC lib/nvme/nvme_auth.o 00:03:10.874 LIB libspdk_fsdev.a 00:03:10.874 SO libspdk_fsdev.so.2.0 00:03:10.874 CC lib/nvme/nvme_cuse.o 00:03:10.874 CC lib/nvme/nvme_vfio_user.o 00:03:10.874 LIB libspdk_virtio.a 00:03:10.874 CC lib/bdev/bdev.o 00:03:10.874 SYMLINK libspdk_fsdev.so 00:03:10.874 CC 
lib/bdev/bdev_rpc.o 00:03:10.874 SO libspdk_virtio.so.7.0 00:03:11.134 SYMLINK libspdk_virtio.so 00:03:11.134 CC lib/bdev/bdev_zone.o 00:03:11.134 CC lib/bdev/part.o 00:03:11.134 CC lib/bdev/scsi_nvme.o 00:03:11.394 CC lib/nvme/nvme_rdma.o 00:03:11.394 CC lib/event/app.o 00:03:11.394 CC lib/blob/request.o 00:03:11.394 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:11.394 CC lib/event/reactor.o 00:03:11.653 CC lib/event/log_rpc.o 00:03:11.653 CC lib/event/app_rpc.o 00:03:11.653 CC lib/event/scheduler_static.o 00:03:11.653 CC lib/blob/zeroes.o 00:03:11.913 CC lib/blob/blob_bs_dev.o 00:03:11.913 LIB libspdk_event.a 00:03:12.172 SO libspdk_event.so.14.0 00:03:12.172 LIB libspdk_fuse_dispatcher.a 00:03:12.172 SYMLINK libspdk_event.so 00:03:12.172 SO libspdk_fuse_dispatcher.so.1.0 00:03:12.172 SYMLINK libspdk_fuse_dispatcher.so 00:03:12.741 LIB libspdk_nvme.a 00:03:13.001 SO libspdk_nvme.so.15.0 00:03:13.260 SYMLINK libspdk_nvme.so 00:03:13.520 LIB libspdk_blob.a 00:03:13.520 SO libspdk_blob.so.11.0 00:03:13.780 SYMLINK libspdk_blob.so 00:03:13.780 LIB libspdk_bdev.a 00:03:13.780 SO libspdk_bdev.so.17.0 00:03:14.039 SYMLINK libspdk_bdev.so 00:03:14.039 CC lib/lvol/lvol.o 00:03:14.039 CC lib/blobfs/blobfs.o 00:03:14.040 CC lib/blobfs/tree.o 00:03:14.301 CC lib/nbd/nbd.o 00:03:14.301 CC lib/nvmf/ctrlr.o 00:03:14.301 CC lib/nbd/nbd_rpc.o 00:03:14.301 CC lib/nvmf/ctrlr_discovery.o 00:03:14.301 CC lib/ftl/ftl_core.o 00:03:14.301 CC lib/ublk/ublk.o 00:03:14.301 CC lib/scsi/dev.o 00:03:14.301 CC lib/scsi/lun.o 00:03:14.301 CC lib/scsi/port.o 00:03:14.562 CC lib/ublk/ublk_rpc.o 00:03:14.562 CC lib/nvmf/ctrlr_bdev.o 00:03:14.562 CC lib/scsi/scsi.o 00:03:14.562 CC lib/ftl/ftl_init.o 00:03:14.562 CC lib/ftl/ftl_layout.o 00:03:14.562 LIB libspdk_nbd.a 00:03:14.562 SO libspdk_nbd.so.7.0 00:03:14.822 CC lib/scsi/scsi_bdev.o 00:03:14.822 CC lib/ftl/ftl_debug.o 00:03:14.822 SYMLINK libspdk_nbd.so 00:03:14.822 CC lib/ftl/ftl_io.o 00:03:14.822 CC lib/ftl/ftl_sb.o 00:03:14.822 LIB libspdk_ublk.a 00:03:14.822 SO libspdk_ublk.so.3.0 00:03:15.081 SYMLINK libspdk_ublk.so 00:03:15.081 CC lib/ftl/ftl_l2p.o 00:03:15.081 CC lib/ftl/ftl_l2p_flat.o 00:03:15.081 LIB libspdk_blobfs.a 00:03:15.081 CC lib/ftl/ftl_nv_cache.o 00:03:15.081 SO libspdk_blobfs.so.10.0 00:03:15.081 CC lib/nvmf/subsystem.o 00:03:15.081 CC lib/scsi/scsi_pr.o 00:03:15.081 SYMLINK libspdk_blobfs.so 00:03:15.081 CC lib/scsi/scsi_rpc.o 00:03:15.081 LIB libspdk_lvol.a 00:03:15.081 SO libspdk_lvol.so.10.0 00:03:15.081 CC lib/nvmf/nvmf.o 00:03:15.081 CC lib/nvmf/nvmf_rpc.o 00:03:15.340 SYMLINK libspdk_lvol.so 00:03:15.340 CC lib/nvmf/transport.o 00:03:15.340 CC lib/scsi/task.o 00:03:15.340 CC lib/ftl/ftl_band.o 00:03:15.340 CC lib/nvmf/tcp.o 00:03:15.340 CC lib/ftl/ftl_band_ops.o 00:03:15.599 LIB libspdk_scsi.a 00:03:15.599 SO libspdk_scsi.so.9.0 00:03:15.599 SYMLINK libspdk_scsi.so 00:03:15.599 CC lib/nvmf/stubs.o 00:03:15.599 CC lib/nvmf/mdns_server.o 00:03:15.859 CC lib/nvmf/vfio_user.o 00:03:16.119 CC lib/ftl/ftl_writer.o 00:03:16.119 CC lib/ftl/ftl_rq.o 00:03:16.119 CC lib/ftl/ftl_reloc.o 00:03:16.119 CC lib/ftl/ftl_l2p_cache.o 00:03:16.119 CC lib/nvmf/rdma.o 00:03:16.119 CC lib/iscsi/conn.o 00:03:16.379 CC lib/ftl/ftl_p2l.o 00:03:16.379 CC lib/vhost/vhost.o 00:03:16.379 CC lib/vhost/vhost_rpc.o 00:03:16.379 CC lib/vhost/vhost_scsi.o 00:03:16.379 CC lib/vhost/vhost_blk.o 00:03:16.638 CC lib/vhost/rte_vhost_user.o 00:03:16.638 CC lib/ftl/ftl_p2l_log.o 00:03:16.897 CC lib/iscsi/init_grp.o 00:03:16.897 CC lib/nvmf/auth.o 00:03:17.156 CC 
lib/ftl/mngt/ftl_mngt.o 00:03:17.156 CC lib/iscsi/iscsi.o 00:03:17.156 CC lib/iscsi/param.o 00:03:17.156 CC lib/iscsi/portal_grp.o 00:03:17.416 CC lib/iscsi/tgt_node.o 00:03:17.416 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:17.416 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:17.416 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:17.416 CC lib/iscsi/iscsi_subsystem.o 00:03:17.675 CC lib/iscsi/iscsi_rpc.o 00:03:17.675 LIB libspdk_vhost.a 00:03:17.675 CC lib/iscsi/task.o 00:03:17.675 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:17.675 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.675 SO libspdk_vhost.so.8.0 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:17.935 SYMLINK libspdk_vhost.so 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.935 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.935 CC lib/ftl/utils/ftl_conf.o 00:03:17.935 CC lib/ftl/utils/ftl_md.o 00:03:18.194 CC lib/ftl/utils/ftl_mempool.o 00:03:18.194 CC lib/ftl/utils/ftl_bitmap.o 00:03:18.194 CC lib/ftl/utils/ftl_property.o 00:03:18.194 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:18.194 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.194 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.194 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.453 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.453 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.453 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:18.453 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.453 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.453 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.453 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.453 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:18.453 LIB libspdk_nvmf.a 00:03:18.453 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:18.711 CC lib/ftl/base/ftl_base_dev.o 00:03:18.711 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.711 CC lib/ftl/ftl_trace.o 00:03:18.711 LIB libspdk_iscsi.a 00:03:18.711 SO libspdk_nvmf.so.20.0 00:03:18.711 SO libspdk_iscsi.so.8.0 00:03:18.970 LIB libspdk_ftl.a 00:03:18.970 SYMLINK libspdk_iscsi.so 00:03:18.970 SYMLINK libspdk_nvmf.so 00:03:19.229 SO libspdk_ftl.so.9.0 00:03:19.489 SYMLINK libspdk_ftl.so 00:03:20.058 CC module/vfu_device/vfu_virtio.o 00:03:20.058 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.058 CC module/keyring/file/keyring.o 00:03:20.058 CC module/keyring/linux/keyring.o 00:03:20.058 CC module/sock/posix/posix.o 00:03:20.058 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.058 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.058 CC module/fsdev/aio/fsdev_aio.o 00:03:20.058 CC module/accel/error/accel_error.o 00:03:20.058 CC module/blob/bdev/blob_bdev.o 00:03:20.058 LIB libspdk_env_dpdk_rpc.a 00:03:20.058 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.317 CC module/keyring/file/keyring_rpc.o 00:03:20.317 CC module/keyring/linux/keyring_rpc.o 00:03:20.317 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.317 CC module/vfu_device/vfu_virtio_blk.o 00:03:20.317 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.317 LIB libspdk_scheduler_dynamic.a 00:03:20.317 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:20.317 CC module/accel/error/accel_error_rpc.o 00:03:20.317 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.317 LIB libspdk_keyring_file.a 00:03:20.317 LIB libspdk_keyring_linux.a 00:03:20.317 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:20.317 CC module/vfu_device/vfu_virtio_scsi.o 00:03:20.317 LIB libspdk_blob_bdev.a 00:03:20.317 SO libspdk_keyring_file.so.2.0 00:03:20.317 SO 
libspdk_keyring_linux.so.1.0 00:03:20.317 SYMLINK libspdk_scheduler_dynamic.so 00:03:20.317 SO libspdk_blob_bdev.so.11.0 00:03:20.317 LIB libspdk_accel_error.a 00:03:20.317 SYMLINK libspdk_keyring_file.so 00:03:20.317 SYMLINK libspdk_keyring_linux.so 00:03:20.576 CC module/vfu_device/vfu_virtio_rpc.o 00:03:20.576 SYMLINK libspdk_blob_bdev.so 00:03:20.576 SO libspdk_accel_error.so.2.0 00:03:20.576 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:20.576 SYMLINK libspdk_accel_error.so 00:03:20.576 CC module/fsdev/aio/linux_aio_mgr.o 00:03:20.576 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.576 CC module/sock/uring/uring.o 00:03:20.576 CC module/accel/ioat/accel_ioat.o 00:03:20.836 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.836 CC module/vfu_device/vfu_virtio_fs.o 00:03:20.836 LIB libspdk_scheduler_gscheduler.a 00:03:20.836 LIB libspdk_fsdev_aio.a 00:03:20.836 SO libspdk_scheduler_gscheduler.so.4.0 00:03:20.836 CC module/accel/dsa/accel_dsa.o 00:03:20.836 SO libspdk_fsdev_aio.so.1.0 00:03:20.836 CC module/accel/iaa/accel_iaa.o 00:03:20.836 SYMLINK libspdk_scheduler_gscheduler.so 00:03:20.836 CC module/accel/iaa/accel_iaa_rpc.o 00:03:20.836 LIB libspdk_sock_posix.a 00:03:20.836 SYMLINK libspdk_fsdev_aio.so 00:03:20.836 LIB libspdk_accel_ioat.a 00:03:20.836 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.836 SO libspdk_sock_posix.so.6.0 00:03:20.836 SO libspdk_accel_ioat.so.6.0 00:03:21.096 LIB libspdk_vfu_device.a 00:03:21.096 CC module/bdev/delay/vbdev_delay.o 00:03:21.096 SYMLINK libspdk_sock_posix.so 00:03:21.096 SYMLINK libspdk_accel_ioat.so 00:03:21.096 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.096 SO libspdk_vfu_device.so.3.0 00:03:21.096 LIB libspdk_accel_iaa.a 00:03:21.096 SO libspdk_accel_iaa.so.3.0 00:03:21.096 LIB libspdk_accel_dsa.a 00:03:21.096 SO libspdk_accel_dsa.so.5.0 00:03:21.096 SYMLINK libspdk_vfu_device.so 00:03:21.096 SYMLINK libspdk_accel_iaa.so 00:03:21.096 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.096 CC module/bdev/error/vbdev_error.o 00:03:21.096 CC module/bdev/gpt/gpt.o 00:03:21.096 SYMLINK libspdk_accel_dsa.so 00:03:21.096 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.355 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.355 CC module/bdev/nvme/bdev_nvme.o 00:03:21.355 CC module/bdev/null/bdev_null.o 00:03:21.355 CC module/bdev/malloc/bdev_malloc.o 00:03:21.355 LIB libspdk_bdev_delay.a 00:03:21.355 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.355 SO libspdk_bdev_delay.so.6.0 00:03:21.355 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.355 LIB libspdk_sock_uring.a 00:03:21.355 SO libspdk_sock_uring.so.5.0 00:03:21.355 SYMLINK libspdk_bdev_delay.so 00:03:21.615 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.615 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.615 LIB libspdk_bdev_gpt.a 00:03:21.615 SYMLINK libspdk_sock_uring.so 00:03:21.615 CC module/bdev/nvme/nvme_rpc.o 00:03:21.615 LIB libspdk_blobfs_bdev.a 00:03:21.615 SO libspdk_bdev_gpt.so.6.0 00:03:21.615 SO libspdk_blobfs_bdev.so.6.0 00:03:21.615 SYMLINK libspdk_bdev_gpt.so 00:03:21.615 CC module/bdev/nvme/bdev_mdns_client.o 00:03:21.615 CC module/bdev/null/bdev_null_rpc.o 00:03:21.615 SYMLINK libspdk_blobfs_bdev.so 00:03:21.615 CC module/bdev/nvme/vbdev_opal.o 00:03:21.615 LIB libspdk_bdev_error.a 00:03:21.615 SO libspdk_bdev_error.so.6.0 00:03:21.874 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.874 SYMLINK libspdk_bdev_error.so 00:03:21.874 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:21.874 LIB libspdk_bdev_null.a 00:03:21.874 SO libspdk_bdev_null.so.6.0 00:03:21.874 LIB libspdk_bdev_lvol.a 00:03:21.874 
SYMLINK libspdk_bdev_null.so 00:03:21.874 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:21.874 LIB libspdk_bdev_malloc.a 00:03:21.874 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.874 SO libspdk_bdev_lvol.so.6.0 00:03:21.874 CC module/bdev/split/vbdev_split.o 00:03:21.874 CC module/bdev/raid/bdev_raid.o 00:03:21.874 SO libspdk_bdev_malloc.so.6.0 00:03:22.133 SYMLINK libspdk_bdev_lvol.so 00:03:22.133 CC module/bdev/split/vbdev_split_rpc.o 00:03:22.133 SYMLINK libspdk_bdev_malloc.so 00:03:22.133 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:22.133 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.133 LIB libspdk_bdev_split.a 00:03:22.133 CC module/bdev/uring/bdev_uring.o 00:03:22.133 SO libspdk_bdev_split.so.6.0 00:03:22.133 CC module/bdev/aio/bdev_aio.o 00:03:22.133 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.133 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.392 CC module/bdev/ftl/bdev_ftl.o 00:03:22.392 SYMLINK libspdk_bdev_split.so 00:03:22.392 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.392 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.392 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.392 LIB libspdk_bdev_passthru.a 00:03:22.392 LIB libspdk_bdev_zone_block.a 00:03:22.392 SO libspdk_bdev_passthru.so.6.0 00:03:22.392 SO libspdk_bdev_zone_block.so.6.0 00:03:22.392 CC module/bdev/uring/bdev_uring_rpc.o 00:03:22.666 SYMLINK libspdk_bdev_passthru.so 00:03:22.666 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.667 SYMLINK libspdk_bdev_zone_block.so 00:03:22.667 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.667 LIB libspdk_bdev_ftl.a 00:03:22.667 CC module/bdev/raid/raid0.o 00:03:22.667 LIB libspdk_bdev_aio.a 00:03:22.667 SO libspdk_bdev_ftl.so.6.0 00:03:22.667 SO libspdk_bdev_aio.so.6.0 00:03:22.667 LIB libspdk_bdev_uring.a 00:03:22.667 SYMLINK libspdk_bdev_ftl.so 00:03:22.667 CC module/bdev/raid/raid1.o 00:03:22.667 LIB libspdk_bdev_iscsi.a 00:03:22.667 SO libspdk_bdev_uring.so.6.0 00:03:22.667 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.667 SYMLINK libspdk_bdev_aio.so 00:03:22.667 SO libspdk_bdev_iscsi.so.6.0 00:03:22.970 CC module/bdev/raid/concat.o 00:03:22.970 SYMLINK libspdk_bdev_uring.so 00:03:22.970 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.970 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.970 SYMLINK libspdk_bdev_iscsi.so 00:03:23.229 LIB libspdk_bdev_raid.a 00:03:23.230 SO libspdk_bdev_raid.so.6.0 00:03:23.230 SYMLINK libspdk_bdev_raid.so 00:03:23.230 LIB libspdk_bdev_virtio.a 00:03:23.489 SO libspdk_bdev_virtio.so.6.0 00:03:23.489 SYMLINK libspdk_bdev_virtio.so 00:03:24.426 LIB libspdk_bdev_nvme.a 00:03:24.426 SO libspdk_bdev_nvme.so.7.1 00:03:24.426 SYMLINK libspdk_bdev_nvme.so 00:03:24.993 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.993 CC module/event/subsystems/vmd/vmd.o 00:03:24.993 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.993 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:24.993 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.993 CC module/event/subsystems/fsdev/fsdev.o 00:03:24.993 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.993 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.993 CC module/event/subsystems/sock/sock.o 00:03:24.993 CC module/event/subsystems/keyring/keyring.o 00:03:25.251 LIB libspdk_event_vmd.a 00:03:25.251 LIB libspdk_event_fsdev.a 00:03:25.251 LIB libspdk_event_keyring.a 00:03:25.251 LIB libspdk_event_vhost_blk.a 00:03:25.251 LIB libspdk_event_vfu_tgt.a 00:03:25.251 LIB libspdk_event_scheduler.a 00:03:25.251 LIB libspdk_event_sock.a 00:03:25.251 LIB libspdk_event_iobuf.a 
00:03:25.251 SO libspdk_event_vmd.so.6.0 00:03:25.251 SO libspdk_event_vhost_blk.so.3.0 00:03:25.251 SO libspdk_event_keyring.so.1.0 00:03:25.251 SO libspdk_event_fsdev.so.1.0 00:03:25.251 SO libspdk_event_scheduler.so.4.0 00:03:25.251 SO libspdk_event_vfu_tgt.so.3.0 00:03:25.251 SO libspdk_event_sock.so.5.0 00:03:25.251 SO libspdk_event_iobuf.so.3.0 00:03:25.251 SYMLINK libspdk_event_keyring.so 00:03:25.251 SYMLINK libspdk_event_vhost_blk.so 00:03:25.251 SYMLINK libspdk_event_fsdev.so 00:03:25.251 SYMLINK libspdk_event_scheduler.so 00:03:25.251 SYMLINK libspdk_event_vfu_tgt.so 00:03:25.251 SYMLINK libspdk_event_vmd.so 00:03:25.251 SYMLINK libspdk_event_sock.so 00:03:25.251 SYMLINK libspdk_event_iobuf.so 00:03:25.819 CC module/event/subsystems/accel/accel.o 00:03:25.819 LIB libspdk_event_accel.a 00:03:25.819 SO libspdk_event_accel.so.6.0 00:03:26.078 SYMLINK libspdk_event_accel.so 00:03:26.338 CC module/event/subsystems/bdev/bdev.o 00:03:26.598 LIB libspdk_event_bdev.a 00:03:26.598 SO libspdk_event_bdev.so.6.0 00:03:26.598 SYMLINK libspdk_event_bdev.so 00:03:26.857 CC module/event/subsystems/ublk/ublk.o 00:03:26.857 CC module/event/subsystems/nbd/nbd.o 00:03:26.857 CC module/event/subsystems/scsi/scsi.o 00:03:26.857 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:26.857 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:27.116 LIB libspdk_event_nbd.a 00:03:27.116 LIB libspdk_event_ublk.a 00:03:27.116 LIB libspdk_event_scsi.a 00:03:27.116 SO libspdk_event_nbd.so.6.0 00:03:27.116 SO libspdk_event_ublk.so.3.0 00:03:27.116 SO libspdk_event_scsi.so.6.0 00:03:27.116 SYMLINK libspdk_event_nbd.so 00:03:27.116 SYMLINK libspdk_event_ublk.so 00:03:27.116 SYMLINK libspdk_event_scsi.so 00:03:27.116 LIB libspdk_event_nvmf.a 00:03:27.375 SO libspdk_event_nvmf.so.6.0 00:03:27.375 SYMLINK libspdk_event_nvmf.so 00:03:27.634 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:27.634 CC module/event/subsystems/iscsi/iscsi.o 00:03:27.634 LIB libspdk_event_vhost_scsi.a 00:03:27.893 SO libspdk_event_vhost_scsi.so.3.0 00:03:27.893 LIB libspdk_event_iscsi.a 00:03:27.893 SO libspdk_event_iscsi.so.6.0 00:03:27.893 SYMLINK libspdk_event_vhost_scsi.so 00:03:27.893 SYMLINK libspdk_event_iscsi.so 00:03:28.152 SO libspdk.so.6.0 00:03:28.152 SYMLINK libspdk.so 00:03:28.411 CXX app/trace/trace.o 00:03:28.411 CC app/spdk_nvme_perf/perf.o 00:03:28.411 CC app/spdk_lspci/spdk_lspci.o 00:03:28.411 CC app/trace_record/trace_record.o 00:03:28.411 CC app/nvmf_tgt/nvmf_main.o 00:03:28.669 CC app/spdk_tgt/spdk_tgt.o 00:03:28.669 CC app/iscsi_tgt/iscsi_tgt.o 00:03:28.669 CC test/thread/poller_perf/poller_perf.o 00:03:28.669 CC examples/ioat/perf/perf.o 00:03:28.669 CC examples/util/zipf/zipf.o 00:03:28.669 LINK spdk_lspci 00:03:28.669 LINK nvmf_tgt 00:03:28.669 LINK poller_perf 00:03:28.669 LINK zipf 00:03:28.669 LINK spdk_trace_record 00:03:28.669 LINK spdk_tgt 00:03:28.669 LINK iscsi_tgt 00:03:28.928 LINK ioat_perf 00:03:28.928 LINK spdk_trace 00:03:28.928 CC app/spdk_nvme_identify/identify.o 00:03:29.186 TEST_HEADER include/spdk/accel.h 00:03:29.186 TEST_HEADER include/spdk/accel_module.h 00:03:29.186 TEST_HEADER include/spdk/assert.h 00:03:29.186 TEST_HEADER include/spdk/barrier.h 00:03:29.186 TEST_HEADER include/spdk/base64.h 00:03:29.186 TEST_HEADER include/spdk/bdev.h 00:03:29.186 TEST_HEADER include/spdk/bdev_module.h 00:03:29.186 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.186 TEST_HEADER include/spdk/bit_array.h 00:03:29.186 TEST_HEADER include/spdk/bit_pool.h 00:03:29.186 TEST_HEADER include/spdk/blob_bdev.h 
00:03:29.186 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.186 TEST_HEADER include/spdk/blobfs.h 00:03:29.186 TEST_HEADER include/spdk/blob.h 00:03:29.186 TEST_HEADER include/spdk/conf.h 00:03:29.186 TEST_HEADER include/spdk/config.h 00:03:29.186 TEST_HEADER include/spdk/cpuset.h 00:03:29.186 TEST_HEADER include/spdk/crc16.h 00:03:29.186 TEST_HEADER include/spdk/crc32.h 00:03:29.186 TEST_HEADER include/spdk/crc64.h 00:03:29.186 TEST_HEADER include/spdk/dif.h 00:03:29.186 TEST_HEADER include/spdk/dma.h 00:03:29.186 TEST_HEADER include/spdk/endian.h 00:03:29.186 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.186 TEST_HEADER include/spdk/env.h 00:03:29.186 TEST_HEADER include/spdk/event.h 00:03:29.186 TEST_HEADER include/spdk/fd_group.h 00:03:29.186 TEST_HEADER include/spdk/fd.h 00:03:29.186 TEST_HEADER include/spdk/file.h 00:03:29.186 TEST_HEADER include/spdk/fsdev.h 00:03:29.186 CC test/dma/test_dma/test_dma.o 00:03:29.186 TEST_HEADER include/spdk/fsdev_module.h 00:03:29.186 TEST_HEADER include/spdk/ftl.h 00:03:29.187 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:29.187 CC examples/ioat/verify/verify.o 00:03:29.187 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.187 TEST_HEADER include/spdk/hexlify.h 00:03:29.187 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.187 TEST_HEADER include/spdk/histogram_data.h 00:03:29.187 TEST_HEADER include/spdk/idxd.h 00:03:29.187 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.187 TEST_HEADER include/spdk/init.h 00:03:29.187 TEST_HEADER include/spdk/ioat.h 00:03:29.187 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.187 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.187 TEST_HEADER include/spdk/json.h 00:03:29.187 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.187 TEST_HEADER include/spdk/keyring.h 00:03:29.187 TEST_HEADER include/spdk/keyring_module.h 00:03:29.187 TEST_HEADER include/spdk/likely.h 00:03:29.187 TEST_HEADER include/spdk/log.h 00:03:29.187 TEST_HEADER include/spdk/lvol.h 00:03:29.187 TEST_HEADER include/spdk/md5.h 00:03:29.187 CC test/app/bdev_svc/bdev_svc.o 00:03:29.187 TEST_HEADER include/spdk/memory.h 00:03:29.187 TEST_HEADER include/spdk/mmio.h 00:03:29.187 TEST_HEADER include/spdk/nbd.h 00:03:29.187 TEST_HEADER include/spdk/net.h 00:03:29.187 TEST_HEADER include/spdk/notify.h 00:03:29.187 TEST_HEADER include/spdk/nvme.h 00:03:29.187 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.187 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.187 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.187 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.187 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.187 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.187 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.187 TEST_HEADER include/spdk/nvmf.h 00:03:29.187 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.187 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.187 TEST_HEADER include/spdk/opal.h 00:03:29.187 TEST_HEADER include/spdk/opal_spec.h 00:03:29.187 TEST_HEADER include/spdk/pci_ids.h 00:03:29.187 TEST_HEADER include/spdk/pipe.h 00:03:29.187 TEST_HEADER include/spdk/queue.h 00:03:29.187 TEST_HEADER include/spdk/reduce.h 00:03:29.187 TEST_HEADER include/spdk/rpc.h 00:03:29.187 TEST_HEADER include/spdk/scheduler.h 00:03:29.187 TEST_HEADER include/spdk/scsi.h 00:03:29.187 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.187 CC test/app/histogram_perf/histogram_perf.o 00:03:29.187 TEST_HEADER include/spdk/sock.h 00:03:29.187 TEST_HEADER include/spdk/stdinc.h 00:03:29.187 TEST_HEADER include/spdk/string.h 00:03:29.187 TEST_HEADER include/spdk/thread.h 00:03:29.187 TEST_HEADER 
include/spdk/trace.h 00:03:29.187 TEST_HEADER include/spdk/trace_parser.h 00:03:29.187 TEST_HEADER include/spdk/tree.h 00:03:29.187 TEST_HEADER include/spdk/ublk.h 00:03:29.187 TEST_HEADER include/spdk/util.h 00:03:29.187 TEST_HEADER include/spdk/uuid.h 00:03:29.187 TEST_HEADER include/spdk/version.h 00:03:29.187 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.187 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.187 TEST_HEADER include/spdk/vhost.h 00:03:29.187 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:29.187 TEST_HEADER include/spdk/vmd.h 00:03:29.187 TEST_HEADER include/spdk/xor.h 00:03:29.187 TEST_HEADER include/spdk/zipf.h 00:03:29.187 CXX test/cpp_headers/accel.o 00:03:29.187 CC test/env/mem_callbacks/mem_callbacks.o 00:03:29.445 LINK spdk_nvme_discover 00:03:29.445 LINK verify 00:03:29.445 LINK bdev_svc 00:03:29.445 LINK histogram_perf 00:03:29.445 CXX test/cpp_headers/accel_module.o 00:03:29.703 LINK spdk_nvme_perf 00:03:29.703 CXX test/cpp_headers/assert.o 00:03:29.703 CXX test/cpp_headers/barrier.o 00:03:29.703 LINK test_dma 00:03:29.703 CC test/app/jsoncat/jsoncat.o 00:03:29.703 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:29.703 CC app/spdk_top/spdk_top.o 00:03:29.703 LINK nvme_fuzz 00:03:29.961 CXX test/cpp_headers/base64.o 00:03:29.961 LINK jsoncat 00:03:29.961 LINK mem_callbacks 00:03:29.961 CC test/app/stub/stub.o 00:03:29.961 LINK interrupt_tgt 00:03:29.961 CC test/env/vtophys/vtophys.o 00:03:29.961 LINK spdk_nvme_identify 00:03:29.961 CXX test/cpp_headers/bdev.o 00:03:29.961 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:30.220 LINK stub 00:03:30.220 LINK vtophys 00:03:30.220 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.220 CC examples/thread/thread/thread_ex.o 00:03:30.220 CC test/event/event_perf/event_perf.o 00:03:30.220 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.220 CC test/event/reactor/reactor.o 00:03:30.220 CXX test/cpp_headers/bdev_module.o 00:03:30.478 LINK event_perf 00:03:30.478 LINK reactor 00:03:30.478 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:30.478 CC app/vhost/vhost.o 00:03:30.478 CXX test/cpp_headers/bdev_zone.o 00:03:30.478 LINK thread 00:03:30.478 CC test/nvme/aer/aer.o 00:03:30.478 LINK env_dpdk_post_init 00:03:30.737 CC test/event/reactor_perf/reactor_perf.o 00:03:30.737 CC test/nvme/reset/reset.o 00:03:30.737 LINK vhost 00:03:30.737 CXX test/cpp_headers/bit_array.o 00:03:30.737 LINK vhost_fuzz 00:03:30.737 LINK spdk_top 00:03:30.737 LINK reactor_perf 00:03:30.737 LINK aer 00:03:30.737 CXX test/cpp_headers/bit_pool.o 00:03:30.996 CC test/env/memory/memory_ut.o 00:03:30.996 CC examples/sock/hello_world/hello_sock.o 00:03:30.996 LINK reset 00:03:30.996 CXX test/cpp_headers/blob_bdev.o 00:03:30.996 CC test/nvme/e2edp/nvme_dp.o 00:03:30.996 CC test/nvme/sgl/sgl.o 00:03:30.996 CC test/nvme/overhead/overhead.o 00:03:31.255 CC test/event/app_repeat/app_repeat.o 00:03:31.255 CC app/spdk_dd/spdk_dd.o 00:03:31.255 LINK hello_sock 00:03:31.255 CXX test/cpp_headers/blobfs_bdev.o 00:03:31.255 CC test/env/pci/pci_ut.o 00:03:31.255 LINK app_repeat 00:03:31.255 LINK sgl 00:03:31.255 LINK nvme_dp 00:03:31.514 CXX test/cpp_headers/blobfs.o 00:03:31.514 LINK overhead 00:03:31.514 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.514 CXX test/cpp_headers/blob.o 00:03:31.514 LINK spdk_dd 00:03:31.773 CC test/event/scheduler/scheduler.o 00:03:31.773 CC test/nvme/startup/startup.o 00:03:31.773 CC test/nvme/err_injection/err_injection.o 00:03:31.773 LINK lsvmd 00:03:31.773 LINK pci_ut 00:03:31.773 CXX test/cpp_headers/conf.o 00:03:31.773 CC 
app/fio/nvme/fio_plugin.o 00:03:31.773 LINK startup 00:03:31.773 LINK err_injection 00:03:32.031 LINK scheduler 00:03:32.031 CXX test/cpp_headers/config.o 00:03:32.031 CXX test/cpp_headers/cpuset.o 00:03:32.031 CC examples/vmd/led/led.o 00:03:32.031 CC app/fio/bdev/fio_plugin.o 00:03:32.031 LINK iscsi_fuzz 00:03:32.031 LINK memory_ut 00:03:32.290 LINK led 00:03:32.290 CXX test/cpp_headers/crc16.o 00:03:32.290 CC examples/idxd/perf/perf.o 00:03:32.290 CC test/nvme/reserve/reserve.o 00:03:32.290 CC test/nvme/simple_copy/simple_copy.o 00:03:32.290 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:32.290 CXX test/cpp_headers/crc32.o 00:03:32.549 LINK reserve 00:03:32.549 CC test/nvme/connect_stress/connect_stress.o 00:03:32.549 LINK spdk_nvme 00:03:32.549 CXX test/cpp_headers/crc64.o 00:03:32.549 LINK simple_copy 00:03:32.549 CC examples/accel/perf/accel_perf.o 00:03:32.549 LINK idxd_perf 00:03:32.549 LINK spdk_bdev 00:03:32.549 LINK hello_fsdev 00:03:32.549 CC examples/blob/hello_world/hello_blob.o 00:03:32.809 LINK connect_stress 00:03:32.809 CXX test/cpp_headers/dif.o 00:03:32.809 CXX test/cpp_headers/dma.o 00:03:32.809 CC examples/blob/cli/blobcli.o 00:03:32.809 CC test/nvme/boot_partition/boot_partition.o 00:03:32.809 CC examples/nvme/hello_world/hello_world.o 00:03:32.809 CC examples/nvme/reconnect/reconnect.o 00:03:32.809 LINK hello_blob 00:03:32.809 CXX test/cpp_headers/endian.o 00:03:32.809 CC test/nvme/compliance/nvme_compliance.o 00:03:32.809 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.068 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.068 LINK boot_partition 00:03:33.068 CXX test/cpp_headers/env_dpdk.o 00:03:33.068 LINK hello_world 00:03:33.068 LINK accel_perf 00:03:33.068 LINK fused_ordering 00:03:33.068 LINK doorbell_aers 00:03:33.328 LINK reconnect 00:03:33.328 CXX test/cpp_headers/env.o 00:03:33.328 CC test/nvme/fdp/fdp.o 00:03:33.328 LINK blobcli 00:03:33.328 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:33.328 LINK nvme_compliance 00:03:33.328 CC examples/nvme/arbitration/arbitration.o 00:03:33.328 CXX test/cpp_headers/event.o 00:03:33.328 CC examples/nvme/hotplug/hotplug.o 00:03:33.328 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:33.328 CC examples/nvme/abort/abort.o 00:03:33.587 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:33.587 CXX test/cpp_headers/fd_group.o 00:03:33.587 CC test/nvme/cuse/cuse.o 00:03:33.587 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.587 LINK cmb_copy 00:03:33.587 LINK fdp 00:03:33.587 LINK hotplug 00:03:33.846 CXX test/cpp_headers/fd.o 00:03:33.846 LINK arbitration 00:03:33.846 CXX test/cpp_headers/file.o 00:03:33.846 LINK pmr_persistence 00:03:33.846 CXX test/cpp_headers/fsdev.o 00:03:33.846 LINK abort 00:03:33.846 LINK nvme_manage 00:03:33.846 LINK hello_bdev 00:03:33.846 CXX test/cpp_headers/fsdev_module.o 00:03:33.846 CXX test/cpp_headers/ftl.o 00:03:33.846 CXX test/cpp_headers/fuse_dispatcher.o 00:03:33.846 CC examples/bdev/bdevperf/bdevperf.o 00:03:34.106 CXX test/cpp_headers/gpt_spec.o 00:03:34.106 CXX test/cpp_headers/hexlify.o 00:03:34.106 CC test/rpc_client/rpc_client_test.o 00:03:34.106 CXX test/cpp_headers/histogram_data.o 00:03:34.106 CXX test/cpp_headers/idxd.o 00:03:34.106 CXX test/cpp_headers/idxd_spec.o 00:03:34.106 CXX test/cpp_headers/init.o 00:03:34.106 CC test/accel/dif/dif.o 00:03:34.106 LINK rpc_client_test 00:03:34.364 CC test/blobfs/mkfs/mkfs.o 00:03:34.364 CXX test/cpp_headers/ioat.o 00:03:34.364 CXX test/cpp_headers/ioat_spec.o 00:03:34.364 CXX test/cpp_headers/iscsi_spec.o 00:03:34.364 CXX 
test/cpp_headers/json.o 00:03:34.364 CC test/lvol/esnap/esnap.o 00:03:34.364 CXX test/cpp_headers/jsonrpc.o 00:03:34.364 LINK mkfs 00:03:34.364 CXX test/cpp_headers/keyring.o 00:03:34.364 CXX test/cpp_headers/keyring_module.o 00:03:34.623 CXX test/cpp_headers/likely.o 00:03:34.623 CXX test/cpp_headers/log.o 00:03:34.623 CXX test/cpp_headers/lvol.o 00:03:34.623 CXX test/cpp_headers/md5.o 00:03:34.623 CXX test/cpp_headers/memory.o 00:03:34.623 CXX test/cpp_headers/mmio.o 00:03:34.623 CXX test/cpp_headers/nbd.o 00:03:34.623 CXX test/cpp_headers/net.o 00:03:34.623 CXX test/cpp_headers/notify.o 00:03:34.623 CXX test/cpp_headers/nvme.o 00:03:34.883 CXX test/cpp_headers/nvme_intel.o 00:03:34.883 CXX test/cpp_headers/nvme_ocssd.o 00:03:34.883 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:34.883 CXX test/cpp_headers/nvme_spec.o 00:03:34.883 CXX test/cpp_headers/nvme_zns.o 00:03:34.883 LINK bdevperf 00:03:34.883 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.883 LINK dif 00:03:34.883 LINK cuse 00:03:34.883 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.883 CXX test/cpp_headers/nvmf.o 00:03:35.142 CXX test/cpp_headers/nvmf_spec.o 00:03:35.142 CXX test/cpp_headers/nvmf_transport.o 00:03:35.142 CXX test/cpp_headers/opal.o 00:03:35.142 CXX test/cpp_headers/opal_spec.o 00:03:35.142 CXX test/cpp_headers/pci_ids.o 00:03:35.142 CXX test/cpp_headers/pipe.o 00:03:35.142 CXX test/cpp_headers/queue.o 00:03:35.142 CXX test/cpp_headers/reduce.o 00:03:35.142 CXX test/cpp_headers/rpc.o 00:03:35.142 CXX test/cpp_headers/scheduler.o 00:03:35.142 CXX test/cpp_headers/scsi.o 00:03:35.142 CXX test/cpp_headers/scsi_spec.o 00:03:35.402 CXX test/cpp_headers/sock.o 00:03:35.402 CC examples/nvmf/nvmf/nvmf.o 00:03:35.402 CXX test/cpp_headers/stdinc.o 00:03:35.402 CXX test/cpp_headers/string.o 00:03:35.402 CXX test/cpp_headers/thread.o 00:03:35.402 CXX test/cpp_headers/trace.o 00:03:35.402 CC test/bdev/bdevio/bdevio.o 00:03:35.402 CXX test/cpp_headers/trace_parser.o 00:03:35.402 CXX test/cpp_headers/tree.o 00:03:35.402 CXX test/cpp_headers/ublk.o 00:03:35.402 CXX test/cpp_headers/util.o 00:03:35.402 CXX test/cpp_headers/uuid.o 00:03:35.402 CXX test/cpp_headers/version.o 00:03:35.402 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.665 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.665 CXX test/cpp_headers/vhost.o 00:03:35.665 CXX test/cpp_headers/vmd.o 00:03:35.665 CXX test/cpp_headers/xor.o 00:03:35.665 LINK nvmf 00:03:35.665 CXX test/cpp_headers/zipf.o 00:03:35.928 LINK bdevio 00:03:40.120 LINK esnap 00:03:40.379 00:03:40.379 real 1m27.368s 00:03:40.379 user 7m18.797s 00:03:40.379 sys 2m2.072s 00:03:40.379 08:03:45 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:40.379 08:03:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.379 ************************************ 00:03:40.379 END TEST make 00:03:40.379 ************************************ 00:03:40.379 08:03:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.379 08:03:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.379 08:03:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.379 08:03:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.379 08:03:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.379 08:03:45 -- pm/common@44 -- $ pid=5265 00:03:40.379 08:03:45 -- pm/common@50 -- $ kill -TERM 5265 00:03:40.379 08:03:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.379 08:03:45 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.379 08:03:45 -- pm/common@44 -- $ pid=5267 00:03:40.379 08:03:45 -- pm/common@50 -- $ kill -TERM 5267 00:03:40.379 08:03:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:40.379 08:03:45 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.639 08:03:45 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:40.639 08:03:45 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:40.639 08:03:45 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:40.639 08:03:45 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:40.639 08:03:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.639 08:03:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.639 08:03:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.639 08:03:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.639 08:03:45 -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.639 08:03:45 -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.639 08:03:45 -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.639 08:03:45 -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.639 08:03:45 -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.639 08:03:45 -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.639 08:03:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.639 08:03:45 -- scripts/common.sh@344 -- # case "$op" in 00:03:40.639 08:03:45 -- scripts/common.sh@345 -- # : 1 00:03:40.639 08:03:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.639 08:03:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:40.639 08:03:45 -- scripts/common.sh@365 -- # decimal 1 00:03:40.639 08:03:45 -- scripts/common.sh@353 -- # local d=1 00:03:40.639 08:03:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.639 08:03:45 -- scripts/common.sh@355 -- # echo 1 00:03:40.639 08:03:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.639 08:03:45 -- scripts/common.sh@366 -- # decimal 2 00:03:40.639 08:03:45 -- scripts/common.sh@353 -- # local d=2 00:03:40.639 08:03:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.639 08:03:45 -- scripts/common.sh@355 -- # echo 2 00:03:40.639 08:03:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.639 08:03:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.639 08:03:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.639 08:03:45 -- scripts/common.sh@368 -- # return 0 00:03:40.639 08:03:45 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.639 08:03:45 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.639 --rc genhtml_branch_coverage=1 00:03:40.639 --rc genhtml_function_coverage=1 00:03:40.639 --rc genhtml_legend=1 00:03:40.639 --rc geninfo_all_blocks=1 00:03:40.639 --rc geninfo_unexecuted_blocks=1 00:03:40.639 00:03:40.639 ' 00:03:40.639 08:03:45 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.639 --rc genhtml_branch_coverage=1 00:03:40.639 --rc genhtml_function_coverage=1 00:03:40.639 --rc genhtml_legend=1 00:03:40.639 --rc geninfo_all_blocks=1 00:03:40.639 --rc geninfo_unexecuted_blocks=1 00:03:40.639 00:03:40.639 ' 00:03:40.639 08:03:45 -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.639 --rc genhtml_branch_coverage=1 00:03:40.639 --rc genhtml_function_coverage=1 00:03:40.639 --rc genhtml_legend=1 00:03:40.639 --rc geninfo_all_blocks=1 00:03:40.639 --rc geninfo_unexecuted_blocks=1 00:03:40.639 00:03:40.639 ' 00:03:40.639 08:03:45 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.639 --rc genhtml_branch_coverage=1 00:03:40.639 --rc genhtml_function_coverage=1 00:03:40.639 --rc genhtml_legend=1 00:03:40.639 --rc geninfo_all_blocks=1 00:03:40.639 --rc geninfo_unexecuted_blocks=1 00:03:40.639 00:03:40.639 ' 00:03:40.639 08:03:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.639 08:03:45 -- nvmf/common.sh@7 -- # uname -s 00:03:40.639 08:03:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.639 08:03:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.639 08:03:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.639 08:03:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.639 08:03:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.639 08:03:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.639 08:03:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.639 08:03:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.639 08:03:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.639 08:03:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.639 08:03:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:03:40.639 08:03:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:03:40.639 08:03:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.639 08:03:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.639 08:03:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:40.639 08:03:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.640 08:03:45 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.640 08:03:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:40.640 08:03:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.640 08:03:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.640 08:03:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.640 08:03:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.640 08:03:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.640 08:03:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.640 08:03:45 -- 
paths/export.sh@5 -- # export PATH 00:03:40.640 08:03:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.640 08:03:45 -- nvmf/common.sh@51 -- # : 0 00:03:40.640 08:03:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:40.640 08:03:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:40.640 08:03:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.640 08:03:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.640 08:03:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.640 08:03:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:40.640 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:40.640 08:03:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:40.640 08:03:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:40.640 08:03:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:40.640 08:03:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.640 08:03:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.640 08:03:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.640 08:03:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.640 08:03:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.640 08:03:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.640 08:03:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.640 08:03:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.899 08:03:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.899 08:03:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.899 08:03:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.899 08:03:45 -- spdk/autotest.sh@48 -- # udevadm_pid=54926 00:03:40.899 08:03:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.899 08:03:45 -- pm/common@17 -- # local monitor 00:03:40.899 08:03:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.899 08:03:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.899 08:03:45 -- pm/common@25 -- # sleep 1 00:03:40.899 08:03:45 -- pm/common@21 -- # date +%s 00:03:40.899 08:03:45 -- pm/common@21 -- # date +%s 00:03:40.900 08:03:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731830625 00:03:40.900 08:03:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731830625 00:03:40.900 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731830625_collect-cpu-load.pm.log 00:03:40.900 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731830625_collect-vmstat.pm.log 00:03:41.839 08:03:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.839 08:03:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.839 08:03:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.839 08:03:46 -- 
common/autotest_common.sh@10 -- # set +x 00:03:41.839 08:03:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.839 08:03:46 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:41.839 08:03:46 -- common/autotest_common.sh@10 -- # set +x 00:03:41.839 08:03:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.839 08:03:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.839 08:03:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.839 08:03:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.839 08:03:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.839 08:03:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.839 08:03:46 -- common/autotest_common.sh@1457 -- # uname 00:03:41.839 08:03:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:41.839 08:03:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.839 08:03:46 -- common/autotest_common.sh@1477 -- # uname 00:03:41.839 08:03:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:41.839 08:03:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:41.839 08:03:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:41.839 lcov: LCOV version 1.15 00:03:41.839 08:03:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:56.726 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:56.726 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.818 08:04:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:14.818 08:04:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.818 08:04:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.818 08:04:17 -- spdk/autotest.sh@78 -- # rm -f 00:04:14.818 08:04:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.818 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:14.818 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:14.818 08:04:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:14.818 08:04:18 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:14.818 08:04:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:14.818 08:04:18 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:14.818 08:04:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.818 08:04:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:14.818 08:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:14.818 08:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1660 -- # for nvme in 
/sys/block/nvme* 00:04:14.818 08:04:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:14.818 08:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:14.818 08:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.818 08:04:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:14.818 08:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:14.818 08:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:14.818 08:04:18 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:14.818 08:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:14.818 08:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:14.818 08:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:14.818 08:04:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:14.818 08:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.818 08:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.818 08:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:14.818 08:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:14.818 08:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.818 No valid GPT data, bailing 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # pt= 00:04:14.818 08:04:18 -- scripts/common.sh@395 -- # return 1 00:04:14.818 08:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.818 1+0 records in 00:04:14.818 1+0 records out 00:04:14.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684529 s, 153 MB/s 00:04:14.818 08:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.818 08:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.818 08:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:14.818 08:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:14.818 08:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:14.818 No valid GPT data, bailing 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # pt= 00:04:14.818 08:04:18 -- scripts/common.sh@395 -- # return 1 00:04:14.818 08:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:14.818 1+0 records in 00:04:14.818 1+0 records out 00:04:14.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00690606 s, 152 MB/s 00:04:14.818 08:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.818 08:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.818 08:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:14.818 08:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:14.818 08:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:14.818 No valid GPT data, bailing 00:04:14.818 08:04:18 -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # pt= 00:04:14.818 08:04:18 -- scripts/common.sh@395 -- # return 1 00:04:14.818 08:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:14.818 1+0 records in 00:04:14.818 1+0 records out 00:04:14.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051366 s, 204 MB/s 00:04:14.818 08:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.818 08:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.818 08:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:14.818 08:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:14.818 08:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:14.818 No valid GPT data, bailing 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:14.818 08:04:18 -- scripts/common.sh@394 -- # pt= 00:04:14.818 08:04:18 -- scripts/common.sh@395 -- # return 1 00:04:14.818 08:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:14.818 1+0 records in 00:04:14.818 1+0 records out 00:04:14.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656137 s, 160 MB/s 00:04:14.818 08:04:18 -- spdk/autotest.sh@105 -- # sync 00:04:14.818 08:04:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.818 08:04:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.818 08:04:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.379 08:04:22 -- spdk/autotest.sh@111 -- # uname -s 00:04:17.379 08:04:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:17.379 08:04:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:17.379 08:04:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:17.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.948 Hugepages 00:04:17.948 node hugesize free / total 00:04:17.948 node0 1048576kB 0 / 0 00:04:17.948 node0 2048kB 0 / 0 00:04:17.948 00:04:17.948 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.207 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.207 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:18.465 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:18.465 08:04:23 -- spdk/autotest.sh@117 -- # uname -s 00:04:18.465 08:04:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:18.465 08:04:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:18.465 08:04:23 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.402 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.402 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.402 08:04:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:20.781 08:04:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:20.781 08:04:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:20.781 08:04:25 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.781 08:04:25 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:20.781 08:04:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:20.781 08:04:25 -- common/autotest_common.sh@1498 -- # local 
bdfs 00:04:20.781 08:04:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.781 08:04:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.781 08:04:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:20.781 08:04:25 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:20.781 08:04:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:20.781 08:04:25 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.301 Waiting for block devices as requested 00:04:21.301 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.301 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.560 08:04:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.560 08:04:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.560 08:04:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.560 08:04:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.560 08:04:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.560 08:04:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1543 -- # continue 00:04:21.560 08:04:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.560 08:04:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.560 08:04:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.560 08:04:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.560 08:04:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:21.560 08:04:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.560 08:04:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.560 08:04:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.560 08:04:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.560 08:04:26 -- common/autotest_common.sh@1543 -- # continue 00:04:21.560 08:04:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.560 08:04:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.560 08:04:26 -- common/autotest_common.sh@10 -- # set +x 00:04:21.560 08:04:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.560 08:04:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.560 08:04:26 -- common/autotest_common.sh@10 -- # set +x 00:04:21.560 08:04:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.500 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.500 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.760 08:04:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.760 08:04:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.760 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:22.760 08:04:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.760 08:04:27 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:22.760 08:04:27 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.760 08:04:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:22.760 08:04:27 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:22.760 08:04:27 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:22.760 08:04:27 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.760 08:04:27 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:22.760 08:04:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:22.760 08:04:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:22.760 08:04:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.760 08:04:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.760 08:04:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:22.760 08:04:27 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:22.760 08:04:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:22.760 08:04:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 
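Note on the id-ctrl checks traced above: the script is deciding, per controller, whether any namespace cleanup is needed before the tests run. A rough bash reconstruction of that check (variable names taken from the xtrace; the surrounding per-controller loop and the comments are assumptions, not the literal autotest_common.sh source):

    oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)     # ' 0x12a' in this run
    oacs_ns_manage=$((oacs & 0x8))      # OACS bit 3 = Namespace Management; 0x12a & 0x8 = 8
    if ((oacs_ns_manage != 0)); then
        unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)
        ((unvmcap == 0)) && continue    # no unallocated capacity left behind, skip this controller
    fi

Both emulated controllers report unvmcap 0, which is why each iteration ends in 'continue'.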
00:04:22.760 08:04:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.760 08:04:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.760 08:04:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.760 08:04:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.760 08:04:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.760 08:04:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:22.760 08:04:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.760 08:04:27 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:22.760 08:04:27 -- common/autotest_common.sh@1572 -- # return 0 00:04:22.760 08:04:27 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:22.760 08:04:27 -- common/autotest_common.sh@1580 -- # return 0 00:04:22.760 08:04:27 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:22.760 08:04:27 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:22.760 08:04:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.760 08:04:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.760 08:04:27 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:22.760 08:04:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.760 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:22.760 08:04:27 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:22.760 08:04:27 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.760 08:04:27 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.760 08:04:27 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.760 08:04:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.760 08:04:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.760 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:22.760 ************************************ 00:04:22.760 START TEST env 00:04:22.760 ************************************ 00:04:22.760 08:04:27 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:23.019 * Looking for test storage... 
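The backslash-heavy comparison above is just bash xtrace escaping each character of the right-hand pattern in a [[ == ]] test, so \0\x\0\a\5\4 reads as the literal string 0x0a54. A plain-form sketch of what opal_revert_cleanup appears to be filtering for (identifying 0x0a54 as the Intel DC P4500/P4510-class device ID is an inference from the value, not something the log states):

    device=$(cat "/sys/bus/pci/devices/$bdf/device")    # 0x0010 for these QEMU-emulated NVMe drives
    [[ $device == "0x0a54" ]] && bdfs+=("$bdf")         # keep only 0x0a54 controllers for the Opal revert

Since neither device matches, the collected list stays empty, (( 0 > 0 )) is false, and the revert is skipped.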
00:04:23.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.019 08:04:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.019 08:04:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.019 08:04:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.019 08:04:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.019 08:04:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.019 08:04:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.019 08:04:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.019 08:04:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.019 08:04:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.019 08:04:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.019 08:04:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.019 08:04:27 env -- scripts/common.sh@344 -- # case "$op" in 00:04:23.019 08:04:27 env -- scripts/common.sh@345 -- # : 1 00:04:23.019 08:04:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.019 08:04:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.019 08:04:27 env -- scripts/common.sh@365 -- # decimal 1 00:04:23.019 08:04:27 env -- scripts/common.sh@353 -- # local d=1 00:04:23.019 08:04:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.019 08:04:27 env -- scripts/common.sh@355 -- # echo 1 00:04:23.019 08:04:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.019 08:04:27 env -- scripts/common.sh@366 -- # decimal 2 00:04:23.019 08:04:27 env -- scripts/common.sh@353 -- # local d=2 00:04:23.019 08:04:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.019 08:04:27 env -- scripts/common.sh@355 -- # echo 2 00:04:23.019 08:04:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.019 08:04:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.019 08:04:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.019 08:04:27 env -- scripts/common.sh@368 -- # return 0 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.019 --rc genhtml_branch_coverage=1 00:04:23.019 --rc genhtml_function_coverage=1 00:04:23.019 --rc genhtml_legend=1 00:04:23.019 --rc geninfo_all_blocks=1 00:04:23.019 --rc geninfo_unexecuted_blocks=1 00:04:23.019 00:04:23.019 ' 00:04:23.019 08:04:27 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.019 --rc genhtml_branch_coverage=1 00:04:23.019 --rc genhtml_function_coverage=1 00:04:23.019 --rc genhtml_legend=1 00:04:23.019 --rc geninfo_all_blocks=1 00:04:23.019 --rc geninfo_unexecuted_blocks=1 00:04:23.019 00:04:23.020 ' 00:04:23.020 08:04:27 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.020 --rc genhtml_branch_coverage=1 00:04:23.020 --rc genhtml_function_coverage=1 00:04:23.020 --rc 
genhtml_legend=1 00:04:23.020 --rc geninfo_all_blocks=1 00:04:23.020 --rc geninfo_unexecuted_blocks=1 00:04:23.020 00:04:23.020 ' 00:04:23.020 08:04:27 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.020 --rc genhtml_branch_coverage=1 00:04:23.020 --rc genhtml_function_coverage=1 00:04:23.020 --rc genhtml_legend=1 00:04:23.020 --rc geninfo_all_blocks=1 00:04:23.020 --rc geninfo_unexecuted_blocks=1 00:04:23.020 00:04:23.020 ' 00:04:23.020 08:04:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.020 08:04:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.020 08:04:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.020 08:04:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.020 ************************************ 00:04:23.020 START TEST env_memory 00:04:23.020 ************************************ 00:04:23.020 08:04:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.279 00:04:23.279 00:04:23.279 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.279 http://cunit.sourceforge.net/ 00:04:23.279 00:04:23.279 00:04:23.279 Suite: memory 00:04:23.279 Test: alloc and free memory map ...[2024-11-17 08:04:28.071222] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.279 passed 00:04:23.279 Test: mem map translation ...[2024-11-17 08:04:28.115558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.279 [2024-11-17 08:04:28.115604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.279 [2024-11-17 08:04:28.115671] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.279 [2024-11-17 08:04:28.115694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.279 passed 00:04:23.279 Test: mem map registration ...[2024-11-17 08:04:28.182716] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.279 [2024-11-17 08:04:28.182762] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.279 passed 00:04:23.279 Test: mem map adjacent registrations ...passed 00:04:23.279 00:04:23.279 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.279 suites 1 1 n/a 0 0 00:04:23.279 tests 4 4 4 0 0 00:04:23.279 asserts 152 152 152 0 n/a 00:04:23.279 00:04:23.279 Elapsed time = 0.239 seconds 00:04:23.539 ************************************ 00:04:23.539 END TEST env_memory 00:04:23.539 ************************************ 00:04:23.539 00:04:23.539 real 0m0.295s 00:04:23.539 user 0m0.255s 00:04:23.539 sys 0m0.030s 00:04:23.539 08:04:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.539 08:04:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.539 08:04:28 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.539 08:04:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.539 08:04:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.539 08:04:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.539 ************************************ 00:04:23.539 START TEST env_vtophys 00:04:23.539 ************************************ 00:04:23.539 08:04:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.539 EAL: lib.eal log level changed from notice to debug 00:04:23.539 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.539 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.539 EAL: Maximum logical cores by configuration: 128 00:04:23.539 EAL: Detected CPU lcores: 10 00:04:23.539 EAL: Detected NUMA nodes: 1 00:04:23.539 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.539 EAL: Detected shared linkage of DPDK 00:04:23.539 EAL: No shared files mode enabled, IPC will be disabled 00:04:23.539 EAL: Selected IOVA mode 'PA' 00:04:23.539 EAL: Probing VFIO support... 00:04:23.539 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.539 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.539 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.539 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.539 EAL: Setting up physically contiguous memory... 
00:04:23.539 EAL: Setting maximum number of open files to 524288 00:04:23.539 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.539 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.539 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.539 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.539 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.539 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.539 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.539 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.539 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.539 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.539 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.539 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.539 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.539 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.539 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.539 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.539 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.539 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.539 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.539 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.539 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.539 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.539 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.539 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.539 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.539 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.539 EAL: Hugepages will be freed exactly as allocated. 00:04:23.539 EAL: No shared files mode enabled, IPC is disabled 00:04:23.539 EAL: No shared files mode enabled, IPC is disabled 00:04:23.799 EAL: TSC frequency is ~2490000 KHz 00:04:23.799 EAL: Main lcore 0 is ready (tid=7f05be055a40;cpuset=[0]) 00:04:23.799 EAL: Trying to obtain current memory policy. 00:04:23.799 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.799 EAL: Restoring previous memory policy: 0 00:04:23.799 EAL: request: mp_malloc_sync 00:04:23.799 EAL: No shared files mode enabled, IPC is disabled 00:04:23.799 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.799 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.799 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.799 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.799 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:23.799 00:04:23.799 00:04:23.799 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.799 http://cunit.sourceforge.net/ 00:04:23.799 00:04:23.799 00:04:23.799 Suite: components_suite 00:04:24.368 Test: vtophys_malloc_test ...passed 00:04:24.368 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
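A quick sanity check on the reservation sizes in the EAL trace above: each memseg list holds n_segs:8192 segments of hugepage_sz:2097152 bytes, and 8192 * 2 MiB = 0x400000000 bytes (16 GiB), which is exactly the size of each virtual area the EAL asks for; with 4 lists that is 64 GiB of address space reserved up front and, per the later "Heap on socket 0 was expanded by ..." messages, backed by hugepages only as allocations demand. The arithmetic, using nothing beyond the numbers already in the log:

    printf '0x%x\n' $((8192 * 2097152))                    # 0x400000000, one memseg list
    printf '%d GiB\n' $((4 * 8192 * 2097152 / 1024**3))    # 64 GiB of VA reserved in total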
00:04:24.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.368 EAL: Restoring previous memory policy: 4 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was expanded by 4MB 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was shrunk by 4MB 00:04:24.368 EAL: Trying to obtain current memory policy. 00:04:24.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.368 EAL: Restoring previous memory policy: 4 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was expanded by 6MB 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was shrunk by 6MB 00:04:24.368 EAL: Trying to obtain current memory policy. 00:04:24.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.368 EAL: Restoring previous memory policy: 4 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was expanded by 10MB 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was shrunk by 10MB 00:04:24.368 EAL: Trying to obtain current memory policy. 00:04:24.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.368 EAL: Restoring previous memory policy: 4 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was expanded by 18MB 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was shrunk by 18MB 00:04:24.368 EAL: Trying to obtain current memory policy. 00:04:24.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.368 EAL: Restoring previous memory policy: 4 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was expanded by 34MB 00:04:24.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.368 EAL: request: mp_malloc_sync 00:04:24.368 EAL: No shared files mode enabled, IPC is disabled 00:04:24.368 EAL: Heap on socket 0 was shrunk by 34MB 00:04:24.626 EAL: Trying to obtain current memory policy. 
00:04:24.626 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.626 EAL: Restoring previous memory policy: 4 00:04:24.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.626 EAL: request: mp_malloc_sync 00:04:24.626 EAL: No shared files mode enabled, IPC is disabled 00:04:24.626 EAL: Heap on socket 0 was expanded by 66MB 00:04:24.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.626 EAL: request: mp_malloc_sync 00:04:24.626 EAL: No shared files mode enabled, IPC is disabled 00:04:24.626 EAL: Heap on socket 0 was shrunk by 66MB 00:04:24.884 EAL: Trying to obtain current memory policy. 00:04:24.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.884 EAL: Restoring previous memory policy: 4 00:04:24.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.884 EAL: request: mp_malloc_sync 00:04:24.884 EAL: No shared files mode enabled, IPC is disabled 00:04:24.884 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.143 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.143 EAL: request: mp_malloc_sync 00:04:25.143 EAL: No shared files mode enabled, IPC is disabled 00:04:25.143 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.401 EAL: Trying to obtain current memory policy. 00:04:25.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.401 EAL: Restoring previous memory policy: 4 00:04:25.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.401 EAL: request: mp_malloc_sync 00:04:25.401 EAL: No shared files mode enabled, IPC is disabled 00:04:25.401 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.968 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.968 EAL: request: mp_malloc_sync 00:04:25.968 EAL: No shared files mode enabled, IPC is disabled 00:04:25.968 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.537 EAL: Trying to obtain current memory policy. 00:04:26.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.537 EAL: Restoring previous memory policy: 4 00:04:26.537 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.537 EAL: request: mp_malloc_sync 00:04:26.537 EAL: No shared files mode enabled, IPC is disabled 00:04:26.537 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.473 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.732 EAL: request: mp_malloc_sync 00:04:27.732 EAL: No shared files mode enabled, IPC is disabled 00:04:27.732 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.670 EAL: Trying to obtain current memory policy. 
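The expand/shrink sizes reported by vtophys_malloc_test follow a simple pattern: each expansion is a power of two plus one extra 2 MiB hugepage, consistent with the test doubling its allocation size each round and each allocation costing one additional hugepage of overhead (presumably the allocator's element header; that part is an inference, the sizes themselves are straight from the log). A one-liner reproducing the sequence seen above and the 1026MB step that follows:

    for k in 1 2 3 4 5 6 7 8 9 10; do printf '%dMB ' $((2**k + 2)); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB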
00:04:28.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.928 EAL: Restoring previous memory policy: 4 00:04:28.928 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.928 EAL: request: mp_malloc_sync 00:04:28.928 EAL: No shared files mode enabled, IPC is disabled 00:04:28.929 EAL: Heap on socket 0 was expanded by 1026MB 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.092 EAL: request: mp_malloc_sync 00:04:31.092 EAL: No shared files mode enabled, IPC is disabled 00:04:31.092 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.074 passed 00:04:33.074 00:04:33.074 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.074 suites 1 1 n/a 0 0 00:04:33.074 tests 2 2 2 0 0 00:04:33.074 asserts 5614 5614 5614 0 n/a 00:04:33.074 00:04:33.074 Elapsed time = 9.020 seconds 00:04:33.074 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.074 EAL: request: mp_malloc_sync 00:04:33.074 EAL: No shared files mode enabled, IPC is disabled 00:04:33.074 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.074 EAL: No shared files mode enabled, IPC is disabled 00:04:33.074 EAL: No shared files mode enabled, IPC is disabled 00:04:33.074 EAL: No shared files mode enabled, IPC is disabled 00:04:33.074 00:04:33.074 real 0m9.374s 00:04:33.074 user 0m7.921s 00:04:33.074 sys 0m1.281s 00:04:33.074 08:04:37 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.074 08:04:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.074 ************************************ 00:04:33.074 END TEST env_vtophys 00:04:33.074 ************************************ 00:04:33.074 08:04:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:33.074 08:04:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.074 08:04:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.074 08:04:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.074 ************************************ 00:04:33.074 START TEST env_pci 00:04:33.074 ************************************ 00:04:33.074 08:04:37 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:33.074 00:04:33.074 00:04:33.074 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.074 http://cunit.sourceforge.net/ 00:04:33.074 00:04:33.074 00:04:33.074 Suite: pci 00:04:33.075 Test: pci_hook ...[2024-11-17 08:04:37.867921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57302 has claimed it 00:04:33.075 passed 00:04:33.075 00:04:33.075 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.075 suites 1 1 n/a 0 0 00:04:33.075 tests 1 1 1 0 0 00:04:33.075 asserts 25 25 25 0 n/a 00:04:33.075 00:04:33.075 Elapsed time = 0.008 seconds 00:04:33.075 EAL: Cannot find device (10000:00:01.0) 00:04:33.075 EAL: Failed to attach device on primary process 00:04:33.075 00:04:33.075 real 0m0.114s 00:04:33.075 user 0m0.042s 00:04:33.075 sys 0m0.071s 00:04:33.075 08:04:37 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.075 08:04:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.075 ************************************ 00:04:33.075 END TEST env_pci 00:04:33.075 ************************************ 00:04:33.075 08:04:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.075 08:04:37 env -- env/env.sh@15 -- # uname 00:04:33.075 08:04:38 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.075 08:04:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.075 08:04:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.075 08:04:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:33.075 08:04:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.075 08:04:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.075 ************************************ 00:04:33.075 START TEST env_dpdk_post_init 00:04:33.075 ************************************ 00:04:33.075 08:04:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.336 EAL: Detected CPU lcores: 10 00:04:33.336 EAL: Detected NUMA nodes: 1 00:04:33.337 EAL: Detected shared linkage of DPDK 00:04:33.337 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.337 EAL: Selected IOVA mode 'PA' 00:04:33.337 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.337 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:33.337 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:33.337 Starting DPDK initialization... 00:04:33.337 Starting SPDK post initialization... 00:04:33.337 SPDK NVMe probe 00:04:33.337 Attaching to 0000:00:10.0 00:04:33.337 Attaching to 0000:00:11.0 00:04:33.337 Attached to 0000:00:10.0 00:04:33.337 Attached to 0000:00:11.0 00:04:33.337 Cleaning up... 00:04:33.337 00:04:33.337 real 0m0.311s 00:04:33.337 user 0m0.094s 00:04:33.337 sys 0m0.115s 00:04:33.337 08:04:38 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.337 08:04:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.337 ************************************ 00:04:33.337 END TEST env_dpdk_post_init 00:04:33.337 ************************************ 00:04:33.598 08:04:38 env -- env/env.sh@26 -- # uname 00:04:33.598 08:04:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.598 08:04:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.598 08:04:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.598 08:04:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.598 08:04:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.598 ************************************ 00:04:33.598 START TEST env_mem_callbacks 00:04:33.598 ************************************ 00:04:33.598 08:04:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.598 EAL: Detected CPU lcores: 10 00:04:33.598 EAL: Detected NUMA nodes: 1 00:04:33.598 EAL: Detected shared linkage of DPDK 00:04:33.598 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.598 EAL: Selected IOVA mode 'PA' 00:04:33.598 00:04:33.598 00:04:33.598 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.598 http://cunit.sourceforge.net/ 00:04:33.598 00:04:33.598 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.598 00:04:33.598 Suite: memory 00:04:33.598 Test: test ... 
00:04:33.598 register 0x200000200000 2097152 00:04:33.598 malloc 3145728 00:04:33.858 register 0x200000400000 4194304 00:04:33.858 buf 0x2000004fffc0 len 3145728 PASSED 00:04:33.858 malloc 64 00:04:33.858 buf 0x2000004ffec0 len 64 PASSED 00:04:33.858 malloc 4194304 00:04:33.858 register 0x200000800000 6291456 00:04:33.858 buf 0x2000009fffc0 len 4194304 PASSED 00:04:33.858 free 0x2000004fffc0 3145728 00:04:33.858 free 0x2000004ffec0 64 00:04:33.858 unregister 0x200000400000 4194304 PASSED 00:04:33.858 free 0x2000009fffc0 4194304 00:04:33.858 unregister 0x200000800000 6291456 PASSED 00:04:33.858 malloc 8388608 00:04:33.858 register 0x200000400000 10485760 00:04:33.858 buf 0x2000005fffc0 len 8388608 PASSED 00:04:33.858 free 0x2000005fffc0 8388608 00:04:33.858 unregister 0x200000400000 10485760 PASSED 00:04:33.858 passed 00:04:33.858 00:04:33.858 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.858 suites 1 1 n/a 0 0 00:04:33.858 tests 1 1 1 0 0 00:04:33.858 asserts 15 15 15 0 n/a 00:04:33.858 00:04:33.858 Elapsed time = 0.080 seconds 00:04:33.858 00:04:33.858 real 0m0.291s 00:04:33.858 user 0m0.108s 00:04:33.858 sys 0m0.080s 00:04:33.858 ************************************ 00:04:33.858 END TEST env_mem_callbacks 00:04:33.858 ************************************ 00:04:33.858 08:04:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.858 08:04:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 ************************************ 00:04:33.858 END TEST env 00:04:33.858 ************************************ 00:04:33.858 00:04:33.858 real 0m11.006s 00:04:33.858 user 0m8.669s 00:04:33.858 sys 0m1.944s 00:04:33.858 08:04:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.858 08:04:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 08:04:38 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:33.858 08:04:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.858 08:04:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.858 08:04:38 -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 ************************************ 00:04:33.858 START TEST rpc 00:04:33.858 ************************************ 00:04:33.858 08:04:38 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:34.118 * Looking for test storage... 
00:04:34.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.118 08:04:38 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.118 08:04:38 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.118 08:04:38 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.118 08:04:39 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.118 08:04:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.118 08:04:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.118 08:04:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.118 08:04:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.118 08:04:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.118 08:04:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.118 08:04:39 rpc -- scripts/common.sh@345 -- # : 1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.118 08:04:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.118 08:04:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.118 08:04:39 rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.118 08:04:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.118 08:04:39 rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.118 08:04:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.118 08:04:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.118 08:04:39 rpc -- scripts/common.sh@368 -- # return 0 00:04:34.118 08:04:39 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.118 08:04:39 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.118 --rc genhtml_branch_coverage=1 00:04:34.118 --rc genhtml_function_coverage=1 00:04:34.118 --rc genhtml_legend=1 00:04:34.118 --rc geninfo_all_blocks=1 00:04:34.118 --rc geninfo_unexecuted_blocks=1 00:04:34.118 00:04:34.118 ' 00:04:34.118 08:04:39 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.118 --rc genhtml_branch_coverage=1 00:04:34.118 --rc genhtml_function_coverage=1 00:04:34.118 --rc genhtml_legend=1 00:04:34.118 --rc geninfo_all_blocks=1 00:04:34.118 --rc geninfo_unexecuted_blocks=1 00:04:34.118 00:04:34.118 ' 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.119 --rc genhtml_branch_coverage=1 00:04:34.119 --rc genhtml_function_coverage=1 00:04:34.119 --rc 
genhtml_legend=1 00:04:34.119 --rc geninfo_all_blocks=1 00:04:34.119 --rc geninfo_unexecuted_blocks=1 00:04:34.119 00:04:34.119 ' 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.119 --rc genhtml_branch_coverage=1 00:04:34.119 --rc genhtml_function_coverage=1 00:04:34.119 --rc genhtml_legend=1 00:04:34.119 --rc geninfo_all_blocks=1 00:04:34.119 --rc geninfo_unexecuted_blocks=1 00:04:34.119 00:04:34.119 ' 00:04:34.119 08:04:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:34.119 08:04:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57434 00:04:34.119 08:04:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.119 08:04:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57434 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 57434 ']' 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.119 08:04:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.378 [2024-11-17 08:04:39.182915] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:34.378 [2024-11-17 08:04:39.183296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57434 ] 00:04:34.378 [2024-11-17 08:04:39.370514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.638 [2024-11-17 08:04:39.504788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.638 [2024-11-17 08:04:39.504876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57434' to capture a snapshot of events at runtime. 00:04:34.638 [2024-11-17 08:04:39.504890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.638 [2024-11-17 08:04:39.504905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.638 [2024-11-17 08:04:39.504916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57434 for offline analysis/debug. 
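Note on the trace hint printed just above: this rpc test target was started with '-e bdev' (rpc/rpc.sh@64), so only the bdev tracepoint group is enabled, which matches the tpoint_group_mask of 0x8 reported by trace_get_info later in this run. A minimal sketch of how that shm file could be inspected, assuming the stock spdk_trace tool; the -s/-p form is copied from the notice above, while the -f offline-decode flag is an assumption and not part of this test:

    # snapshot the live trace ring for this pid
    spdk_trace -s spdk_tgt -p 57434
    # or copy the shm file and decode it offline later (-f assumed)
    cp /dev/shm/spdk_tgt_trace.pid57434 /tmp/
    spdk_trace -f /tmp/spdk_tgt_trace.pid57434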
00:04:34.638 [2024-11-17 08:04:39.506340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.897 [2024-11-17 08:04:39.803118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:35.836 08:04:40 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.836 08:04:40 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.836 08:04:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.836 08:04:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.836 08:04:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.836 08:04:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.836 08:04:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.836 08:04:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.836 08:04:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 ************************************ 00:04:35.836 START TEST rpc_integrity 00:04:35.836 ************************************ 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.836 { 00:04:35.836 "name": "Malloc0", 00:04:35.836 "aliases": [ 00:04:35.836 "76fe1aa6-d992-45ce-9bab-a808d89f2a82" 00:04:35.836 ], 00:04:35.836 "product_name": "Malloc disk", 00:04:35.836 "block_size": 512, 00:04:35.836 "num_blocks": 16384, 00:04:35.836 "uuid": "76fe1aa6-d992-45ce-9bab-a808d89f2a82", 00:04:35.836 "assigned_rate_limits": { 00:04:35.836 "rw_ios_per_sec": 0, 00:04:35.836 "rw_mbytes_per_sec": 0, 00:04:35.836 "r_mbytes_per_sec": 0, 00:04:35.836 "w_mbytes_per_sec": 0 00:04:35.836 }, 00:04:35.836 "claimed": false, 00:04:35.836 "zoned": false, 00:04:35.836 
"supported_io_types": { 00:04:35.836 "read": true, 00:04:35.836 "write": true, 00:04:35.836 "unmap": true, 00:04:35.836 "flush": true, 00:04:35.836 "reset": true, 00:04:35.836 "nvme_admin": false, 00:04:35.836 "nvme_io": false, 00:04:35.836 "nvme_io_md": false, 00:04:35.836 "write_zeroes": true, 00:04:35.836 "zcopy": true, 00:04:35.836 "get_zone_info": false, 00:04:35.836 "zone_management": false, 00:04:35.836 "zone_append": false, 00:04:35.836 "compare": false, 00:04:35.836 "compare_and_write": false, 00:04:35.836 "abort": true, 00:04:35.836 "seek_hole": false, 00:04:35.836 "seek_data": false, 00:04:35.836 "copy": true, 00:04:35.836 "nvme_iov_md": false 00:04:35.836 }, 00:04:35.836 "memory_domains": [ 00:04:35.836 { 00:04:35.836 "dma_device_id": "system", 00:04:35.836 "dma_device_type": 1 00:04:35.836 }, 00:04:35.836 { 00:04:35.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.836 "dma_device_type": 2 00:04:35.836 } 00:04:35.836 ], 00:04:35.836 "driver_specific": {} 00:04:35.836 } 00:04:35.836 ]' 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 [2024-11-17 08:04:40.708916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.836 [2024-11-17 08:04:40.708981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.836 [2024-11-17 08:04:40.709017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:35.836 [2024-11-17 08:04:40.709030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.836 [2024-11-17 08:04:40.711979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.836 [2024-11-17 08:04:40.712021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.836 Passthru0 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.836 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.836 { 00:04:35.836 "name": "Malloc0", 00:04:35.836 "aliases": [ 00:04:35.836 "76fe1aa6-d992-45ce-9bab-a808d89f2a82" 00:04:35.836 ], 00:04:35.836 "product_name": "Malloc disk", 00:04:35.836 "block_size": 512, 00:04:35.836 "num_blocks": 16384, 00:04:35.836 "uuid": "76fe1aa6-d992-45ce-9bab-a808d89f2a82", 00:04:35.836 "assigned_rate_limits": { 00:04:35.836 "rw_ios_per_sec": 0, 00:04:35.836 "rw_mbytes_per_sec": 0, 00:04:35.836 "r_mbytes_per_sec": 0, 00:04:35.836 "w_mbytes_per_sec": 0 00:04:35.836 }, 00:04:35.836 "claimed": true, 00:04:35.836 "claim_type": "exclusive_write", 00:04:35.836 "zoned": false, 00:04:35.836 "supported_io_types": { 00:04:35.836 "read": true, 00:04:35.836 "write": true, 00:04:35.836 "unmap": true, 00:04:35.836 "flush": true, 00:04:35.836 "reset": true, 00:04:35.836 "nvme_admin": false, 
00:04:35.836 "nvme_io": false, 00:04:35.836 "nvme_io_md": false, 00:04:35.836 "write_zeroes": true, 00:04:35.836 "zcopy": true, 00:04:35.836 "get_zone_info": false, 00:04:35.836 "zone_management": false, 00:04:35.836 "zone_append": false, 00:04:35.836 "compare": false, 00:04:35.836 "compare_and_write": false, 00:04:35.836 "abort": true, 00:04:35.837 "seek_hole": false, 00:04:35.837 "seek_data": false, 00:04:35.837 "copy": true, 00:04:35.837 "nvme_iov_md": false 00:04:35.837 }, 00:04:35.837 "memory_domains": [ 00:04:35.837 { 00:04:35.837 "dma_device_id": "system", 00:04:35.837 "dma_device_type": 1 00:04:35.837 }, 00:04:35.837 { 00:04:35.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.837 "dma_device_type": 2 00:04:35.837 } 00:04:35.837 ], 00:04:35.837 "driver_specific": {} 00:04:35.837 }, 00:04:35.837 { 00:04:35.837 "name": "Passthru0", 00:04:35.837 "aliases": [ 00:04:35.837 "aaaa8cac-4790-599a-aaee-32248a7d42af" 00:04:35.837 ], 00:04:35.837 "product_name": "passthru", 00:04:35.837 "block_size": 512, 00:04:35.837 "num_blocks": 16384, 00:04:35.837 "uuid": "aaaa8cac-4790-599a-aaee-32248a7d42af", 00:04:35.837 "assigned_rate_limits": { 00:04:35.837 "rw_ios_per_sec": 0, 00:04:35.837 "rw_mbytes_per_sec": 0, 00:04:35.837 "r_mbytes_per_sec": 0, 00:04:35.837 "w_mbytes_per_sec": 0 00:04:35.837 }, 00:04:35.837 "claimed": false, 00:04:35.837 "zoned": false, 00:04:35.837 "supported_io_types": { 00:04:35.837 "read": true, 00:04:35.837 "write": true, 00:04:35.837 "unmap": true, 00:04:35.837 "flush": true, 00:04:35.837 "reset": true, 00:04:35.837 "nvme_admin": false, 00:04:35.837 "nvme_io": false, 00:04:35.837 "nvme_io_md": false, 00:04:35.837 "write_zeroes": true, 00:04:35.837 "zcopy": true, 00:04:35.837 "get_zone_info": false, 00:04:35.837 "zone_management": false, 00:04:35.837 "zone_append": false, 00:04:35.837 "compare": false, 00:04:35.837 "compare_and_write": false, 00:04:35.837 "abort": true, 00:04:35.837 "seek_hole": false, 00:04:35.837 "seek_data": false, 00:04:35.837 "copy": true, 00:04:35.837 "nvme_iov_md": false 00:04:35.837 }, 00:04:35.837 "memory_domains": [ 00:04:35.837 { 00:04:35.837 "dma_device_id": "system", 00:04:35.837 "dma_device_type": 1 00:04:35.837 }, 00:04:35.837 { 00:04:35.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.837 "dma_device_type": 2 00:04:35.837 } 00:04:35.837 ], 00:04:35.837 "driver_specific": { 00:04:35.837 "passthru": { 00:04:35.837 "name": "Passthru0", 00:04:35.837 "base_bdev_name": "Malloc0" 00:04:35.837 } 00:04:35.837 } 00:04:35.837 } 00:04:35.837 ]' 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.837 08:04:40 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.837 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.837 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.096 ************************************ 00:04:36.096 END TEST rpc_integrity 00:04:36.096 ************************************ 00:04:36.096 08:04:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.096 00:04:36.096 real 0m0.332s 00:04:36.096 user 0m0.181s 00:04:36.096 sys 0m0.054s 00:04:36.096 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.096 08:04:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 08:04:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:36.096 08:04:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.096 08:04:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.096 08:04:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 ************************************ 00:04:36.096 START TEST rpc_plugins 00:04:36.096 ************************************ 00:04:36.096 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:36.096 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:36.096 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.096 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.096 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:36.096 08:04:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:36.097 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.097 08:04:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:36.097 { 00:04:36.097 "name": "Malloc1", 00:04:36.097 "aliases": [ 00:04:36.097 "625156d9-c5ab-4069-b790-99d2c3037748" 00:04:36.097 ], 00:04:36.097 "product_name": "Malloc disk", 00:04:36.097 "block_size": 4096, 00:04:36.097 "num_blocks": 256, 00:04:36.097 "uuid": "625156d9-c5ab-4069-b790-99d2c3037748", 00:04:36.097 "assigned_rate_limits": { 00:04:36.097 "rw_ios_per_sec": 0, 00:04:36.097 "rw_mbytes_per_sec": 0, 00:04:36.097 "r_mbytes_per_sec": 0, 00:04:36.097 "w_mbytes_per_sec": 0 00:04:36.097 }, 00:04:36.097 "claimed": false, 00:04:36.097 "zoned": false, 00:04:36.097 "supported_io_types": { 00:04:36.097 "read": true, 00:04:36.097 "write": true, 00:04:36.097 "unmap": true, 00:04:36.097 "flush": true, 00:04:36.097 "reset": true, 00:04:36.097 "nvme_admin": false, 00:04:36.097 "nvme_io": false, 00:04:36.097 "nvme_io_md": false, 00:04:36.097 "write_zeroes": true, 00:04:36.097 "zcopy": true, 00:04:36.097 "get_zone_info": false, 00:04:36.097 "zone_management": false, 00:04:36.097 "zone_append": false, 00:04:36.097 "compare": false, 00:04:36.097 "compare_and_write": false, 00:04:36.097 "abort": true, 00:04:36.097 "seek_hole": false, 00:04:36.097 "seek_data": false, 00:04:36.097 "copy": true, 00:04:36.097 "nvme_iov_md": false 00:04:36.097 }, 00:04:36.097 "memory_domains": [ 00:04:36.097 { 
00:04:36.097 "dma_device_id": "system", 00:04:36.097 "dma_device_type": 1 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.097 "dma_device_type": 2 00:04:36.097 } 00:04:36.097 ], 00:04:36.097 "driver_specific": {} 00:04:36.097 } 00:04:36.097 ]' 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.097 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:36.097 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:36.356 08:04:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:36.356 00:04:36.356 real 0m0.167s 00:04:36.356 user 0m0.095s 00:04:36.356 sys 0m0.030s 00:04:36.356 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.356 08:04:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.356 ************************************ 00:04:36.356 END TEST rpc_plugins 00:04:36.356 ************************************ 00:04:36.356 08:04:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:36.356 08:04:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.356 08:04:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.356 08:04:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.356 ************************************ 00:04:36.356 START TEST rpc_trace_cmd_test 00:04:36.356 ************************************ 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.356 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:36.356 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57434", 00:04:36.356 "tpoint_group_mask": "0x8", 00:04:36.356 "iscsi_conn": { 00:04:36.356 "mask": "0x2", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "scsi": { 00:04:36.356 "mask": "0x4", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "bdev": { 00:04:36.356 "mask": "0x8", 00:04:36.356 "tpoint_mask": "0xffffffffffffffff" 00:04:36.356 }, 00:04:36.356 "nvmf_rdma": { 00:04:36.356 "mask": "0x10", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "nvmf_tcp": { 00:04:36.356 "mask": "0x20", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "ftl": { 00:04:36.356 
"mask": "0x40", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "blobfs": { 00:04:36.356 "mask": "0x80", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "dsa": { 00:04:36.356 "mask": "0x200", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "thread": { 00:04:36.356 "mask": "0x400", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "nvme_pcie": { 00:04:36.356 "mask": "0x800", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "iaa": { 00:04:36.356 "mask": "0x1000", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "nvme_tcp": { 00:04:36.356 "mask": "0x2000", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "bdev_nvme": { 00:04:36.356 "mask": "0x4000", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "sock": { 00:04:36.356 "mask": "0x8000", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "blob": { 00:04:36.356 "mask": "0x10000", 00:04:36.356 "tpoint_mask": "0x0" 00:04:36.356 }, 00:04:36.356 "bdev_raid": { 00:04:36.357 "mask": "0x20000", 00:04:36.357 "tpoint_mask": "0x0" 00:04:36.357 }, 00:04:36.357 "scheduler": { 00:04:36.357 "mask": "0x40000", 00:04:36.357 "tpoint_mask": "0x0" 00:04:36.357 } 00:04:36.357 }' 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.357 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.616 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.616 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.616 ************************************ 00:04:36.616 END TEST rpc_trace_cmd_test 00:04:36.616 ************************************ 00:04:36.616 08:04:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:36.616 00:04:36.616 real 0m0.241s 00:04:36.616 user 0m0.190s 00:04:36.616 sys 0m0.042s 00:04:36.616 08:04:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.616 08:04:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.616 08:04:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.616 08:04:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.616 08:04:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.616 08:04:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.616 08:04:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.616 08:04:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.616 ************************************ 00:04:36.616 START TEST rpc_daemon_integrity 00:04:36.616 ************************************ 00:04:36.616 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:36.616 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.616 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.616 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.616 
08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.617 { 00:04:36.617 "name": "Malloc2", 00:04:36.617 "aliases": [ 00:04:36.617 "c200dfe8-61a3-4359-b32f-a2ee35ee3ece" 00:04:36.617 ], 00:04:36.617 "product_name": "Malloc disk", 00:04:36.617 "block_size": 512, 00:04:36.617 "num_blocks": 16384, 00:04:36.617 "uuid": "c200dfe8-61a3-4359-b32f-a2ee35ee3ece", 00:04:36.617 "assigned_rate_limits": { 00:04:36.617 "rw_ios_per_sec": 0, 00:04:36.617 "rw_mbytes_per_sec": 0, 00:04:36.617 "r_mbytes_per_sec": 0, 00:04:36.617 "w_mbytes_per_sec": 0 00:04:36.617 }, 00:04:36.617 "claimed": false, 00:04:36.617 "zoned": false, 00:04:36.617 "supported_io_types": { 00:04:36.617 "read": true, 00:04:36.617 "write": true, 00:04:36.617 "unmap": true, 00:04:36.617 "flush": true, 00:04:36.617 "reset": true, 00:04:36.617 "nvme_admin": false, 00:04:36.617 "nvme_io": false, 00:04:36.617 "nvme_io_md": false, 00:04:36.617 "write_zeroes": true, 00:04:36.617 "zcopy": true, 00:04:36.617 "get_zone_info": false, 00:04:36.617 "zone_management": false, 00:04:36.617 "zone_append": false, 00:04:36.617 "compare": false, 00:04:36.617 "compare_and_write": false, 00:04:36.617 "abort": true, 00:04:36.617 "seek_hole": false, 00:04:36.617 "seek_data": false, 00:04:36.617 "copy": true, 00:04:36.617 "nvme_iov_md": false 00:04:36.617 }, 00:04:36.617 "memory_domains": [ 00:04:36.617 { 00:04:36.617 "dma_device_id": "system", 00:04:36.617 "dma_device_type": 1 00:04:36.617 }, 00:04:36.617 { 00:04:36.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.617 "dma_device_type": 2 00:04:36.617 } 00:04:36.617 ], 00:04:36.617 "driver_specific": {} 00:04:36.617 } 00:04:36.617 ]' 00:04:36.617 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.876 [2024-11-17 08:04:41.663833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.876 [2024-11-17 08:04:41.663894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:36.876 [2024-11-17 08:04:41.663932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:04:36.876 [2024-11-17 08:04:41.663946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.876 [2024-11-17 08:04:41.666746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.876 [2024-11-17 08:04:41.666789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.876 Passthru0 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.876 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.876 { 00:04:36.876 "name": "Malloc2", 00:04:36.876 "aliases": [ 00:04:36.876 "c200dfe8-61a3-4359-b32f-a2ee35ee3ece" 00:04:36.876 ], 00:04:36.876 "product_name": "Malloc disk", 00:04:36.876 "block_size": 512, 00:04:36.876 "num_blocks": 16384, 00:04:36.876 "uuid": "c200dfe8-61a3-4359-b32f-a2ee35ee3ece", 00:04:36.876 "assigned_rate_limits": { 00:04:36.876 "rw_ios_per_sec": 0, 00:04:36.876 "rw_mbytes_per_sec": 0, 00:04:36.876 "r_mbytes_per_sec": 0, 00:04:36.876 "w_mbytes_per_sec": 0 00:04:36.876 }, 00:04:36.876 "claimed": true, 00:04:36.876 "claim_type": "exclusive_write", 00:04:36.876 "zoned": false, 00:04:36.876 "supported_io_types": { 00:04:36.876 "read": true, 00:04:36.876 "write": true, 00:04:36.876 "unmap": true, 00:04:36.876 "flush": true, 00:04:36.876 "reset": true, 00:04:36.876 "nvme_admin": false, 00:04:36.876 "nvme_io": false, 00:04:36.876 "nvme_io_md": false, 00:04:36.876 "write_zeroes": true, 00:04:36.876 "zcopy": true, 00:04:36.876 "get_zone_info": false, 00:04:36.876 "zone_management": false, 00:04:36.876 "zone_append": false, 00:04:36.876 "compare": false, 00:04:36.876 "compare_and_write": false, 00:04:36.876 "abort": true, 00:04:36.876 "seek_hole": false, 00:04:36.876 "seek_data": false, 00:04:36.876 "copy": true, 00:04:36.876 "nvme_iov_md": false 00:04:36.876 }, 00:04:36.876 "memory_domains": [ 00:04:36.876 { 00:04:36.876 "dma_device_id": "system", 00:04:36.876 "dma_device_type": 1 00:04:36.876 }, 00:04:36.876 { 00:04:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.876 "dma_device_type": 2 00:04:36.876 } 00:04:36.876 ], 00:04:36.876 "driver_specific": {} 00:04:36.876 }, 00:04:36.876 { 00:04:36.876 "name": "Passthru0", 00:04:36.876 "aliases": [ 00:04:36.876 "a3121e9a-e007-5252-b3ec-e91a62873891" 00:04:36.876 ], 00:04:36.876 "product_name": "passthru", 00:04:36.876 "block_size": 512, 00:04:36.876 "num_blocks": 16384, 00:04:36.876 "uuid": "a3121e9a-e007-5252-b3ec-e91a62873891", 00:04:36.876 "assigned_rate_limits": { 00:04:36.876 "rw_ios_per_sec": 0, 00:04:36.876 "rw_mbytes_per_sec": 0, 00:04:36.876 "r_mbytes_per_sec": 0, 00:04:36.876 "w_mbytes_per_sec": 0 00:04:36.876 }, 00:04:36.876 "claimed": false, 00:04:36.876 "zoned": false, 00:04:36.876 "supported_io_types": { 00:04:36.876 "read": true, 00:04:36.876 "write": true, 00:04:36.876 "unmap": true, 00:04:36.876 "flush": true, 00:04:36.876 "reset": true, 00:04:36.876 "nvme_admin": false, 00:04:36.876 "nvme_io": false, 00:04:36.876 
"nvme_io_md": false, 00:04:36.876 "write_zeroes": true, 00:04:36.876 "zcopy": true, 00:04:36.876 "get_zone_info": false, 00:04:36.876 "zone_management": false, 00:04:36.876 "zone_append": false, 00:04:36.876 "compare": false, 00:04:36.876 "compare_and_write": false, 00:04:36.876 "abort": true, 00:04:36.876 "seek_hole": false, 00:04:36.876 "seek_data": false, 00:04:36.876 "copy": true, 00:04:36.876 "nvme_iov_md": false 00:04:36.876 }, 00:04:36.876 "memory_domains": [ 00:04:36.876 { 00:04:36.876 "dma_device_id": "system", 00:04:36.876 "dma_device_type": 1 00:04:36.876 }, 00:04:36.876 { 00:04:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.876 "dma_device_type": 2 00:04:36.877 } 00:04:36.877 ], 00:04:36.877 "driver_specific": { 00:04:36.877 "passthru": { 00:04:36.877 "name": "Passthru0", 00:04:36.877 "base_bdev_name": "Malloc2" 00:04:36.877 } 00:04:36.877 } 00:04:36.877 } 00:04:36.877 ]' 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.877 ************************************ 00:04:36.877 END TEST rpc_daemon_integrity 00:04:36.877 ************************************ 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.877 00:04:36.877 real 0m0.348s 00:04:36.877 user 0m0.198s 00:04:36.877 sys 0m0.055s 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.877 08:04:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.137 08:04:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.137 08:04:41 rpc -- rpc/rpc.sh@84 -- # killprocess 57434 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@954 -- # '[' -z 57434 ']' 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@958 -- # kill -0 57434 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57434 00:04:37.137 killing process with pid 57434 00:04:37.137 08:04:41 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57434' 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@973 -- # kill 57434 00:04:37.137 08:04:41 rpc -- common/autotest_common.sh@978 -- # wait 57434 00:04:39.675 00:04:39.675 real 0m5.701s 00:04:39.675 user 0m5.972s 00:04:39.675 sys 0m1.203s 00:04:39.675 08:04:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.675 08:04:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.675 ************************************ 00:04:39.675 END TEST rpc 00:04:39.675 ************************************ 00:04:39.675 08:04:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:39.675 08:04:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.675 08:04:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.675 08:04:44 -- common/autotest_common.sh@10 -- # set +x 00:04:39.675 ************************************ 00:04:39.675 START TEST skip_rpc 00:04:39.675 ************************************ 00:04:39.675 08:04:44 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:39.934 * Looking for test storage... 00:04:39.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.934 08:04:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.934 --rc genhtml_branch_coverage=1 00:04:39.934 --rc genhtml_function_coverage=1 00:04:39.934 --rc genhtml_legend=1 00:04:39.934 --rc geninfo_all_blocks=1 00:04:39.934 --rc geninfo_unexecuted_blocks=1 00:04:39.934 00:04:39.934 ' 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.934 --rc genhtml_branch_coverage=1 00:04:39.934 --rc genhtml_function_coverage=1 00:04:39.934 --rc genhtml_legend=1 00:04:39.934 --rc geninfo_all_blocks=1 00:04:39.934 --rc geninfo_unexecuted_blocks=1 00:04:39.934 00:04:39.934 ' 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.934 --rc genhtml_branch_coverage=1 00:04:39.934 --rc genhtml_function_coverage=1 00:04:39.934 --rc genhtml_legend=1 00:04:39.934 --rc geninfo_all_blocks=1 00:04:39.934 --rc geninfo_unexecuted_blocks=1 00:04:39.934 00:04:39.934 ' 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.934 --rc genhtml_branch_coverage=1 00:04:39.934 --rc genhtml_function_coverage=1 00:04:39.934 --rc genhtml_legend=1 00:04:39.934 --rc geninfo_all_blocks=1 00:04:39.934 --rc geninfo_unexecuted_blocks=1 00:04:39.934 00:04:39.934 ' 00:04:39.934 08:04:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.934 08:04:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.934 08:04:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.934 08:04:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.934 ************************************ 00:04:39.934 START TEST skip_rpc 00:04:39.934 ************************************ 00:04:39.934 08:04:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:39.934 08:04:44 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57669 00:04:39.934 08:04:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.934 08:04:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.934 08:04:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.193 [2024-11-17 08:04:44.975288] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:40.193 [2024-11-17 08:04:44.975427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57669 ] 00:04:40.193 [2024-11-17 08:04:45.163867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.452 [2024-11-17 08:04:45.308266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.712 [2024-11-17 08:04:45.605784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:44.905 08:04:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.905 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:44.905 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57669 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57669 ']' 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57669 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.906 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57669 00:04:45.165 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.165 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.165 killing process with pid 57669 00:04:45.165 08:04:49 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57669' 00:04:45.165 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57669 00:04:45.165 08:04:49 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57669 00:04:47.702 00:04:47.702 real 0m7.642s 00:04:47.702 user 0m6.957s 00:04:47.702 sys 0m0.606s 00:04:47.702 08:04:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.702 ************************************ 00:04:47.702 END TEST skip_rpc 00:04:47.702 ************************************ 00:04:47.702 08:04:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.702 08:04:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.702 08:04:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.702 08:04:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.702 08:04:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.702 ************************************ 00:04:47.702 START TEST skip_rpc_with_json 00:04:47.702 ************************************ 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57773 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57773 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57773 ']' 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.702 08:04:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.702 [2024-11-17 08:04:52.690844] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
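The skip_rpc_with_json pass that follows drives this freshly started target through an RPC round-trip: nvmf_get_transports is expected to fail with "No such device" while no TCP transport exists, the test then creates one and dumps the runtime configuration with save_config. A rough standalone sketch of the same sequence, assuming the stock scripts/rpc.py client; the --json reload flag at the end is an assumption for illustration and is not exercised here:

    scripts/rpc.py nvmf_get_transports --trtype tcp   # fails until the transport exists
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > /tmp/config.json
    # a later target instance could be booted straight from that file:
    # build/bin/spdk_tgt --json /tmp/config.json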
00:04:47.702 [2024-11-17 08:04:52.690995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57773 ] 00:04:47.961 [2024-11-17 08:04:52.875454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.221 [2024-11-17 08:04:53.013780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.480 [2024-11-17 08:04:53.315177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.049 [2024-11-17 08:04:54.036701] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:49.049 request: 00:04:49.049 { 00:04:49.049 "trtype": "tcp", 00:04:49.049 "method": "nvmf_get_transports", 00:04:49.049 "req_id": 1 00:04:49.049 } 00:04:49.049 Got JSON-RPC error response 00:04:49.049 response: 00:04:49.049 { 00:04:49.049 "code": -19, 00:04:49.049 "message": "No such device" 00:04:49.049 } 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.049 [2024-11-17 08:04:54.048829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.049 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.309 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.309 08:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.309 { 00:04:49.309 "subsystems": [ 00:04:49.309 { 00:04:49.309 "subsystem": "fsdev", 00:04:49.309 "config": [ 00:04:49.309 { 00:04:49.309 "method": "fsdev_set_opts", 00:04:49.309 "params": { 00:04:49.309 "fsdev_io_pool_size": 65535, 00:04:49.309 "fsdev_io_cache_size": 256 00:04:49.309 } 00:04:49.309 } 00:04:49.309 ] 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "subsystem": "vfio_user_target", 00:04:49.309 "config": null 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "subsystem": "keyring", 00:04:49.309 "config": [] 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "subsystem": "iobuf", 00:04:49.309 "config": [ 00:04:49.309 { 00:04:49.309 "method": "iobuf_set_options", 00:04:49.309 "params": { 00:04:49.309 "small_pool_count": 8192, 00:04:49.309 "large_pool_count": 1024, 00:04:49.309 
"small_bufsize": 8192, 00:04:49.309 "large_bufsize": 135168, 00:04:49.309 "enable_numa": false 00:04:49.309 } 00:04:49.309 } 00:04:49.309 ] 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "subsystem": "sock", 00:04:49.309 "config": [ 00:04:49.309 { 00:04:49.309 "method": "sock_set_default_impl", 00:04:49.309 "params": { 00:04:49.309 "impl_name": "uring" 00:04:49.309 } 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "method": "sock_impl_set_options", 00:04:49.309 "params": { 00:04:49.309 "impl_name": "ssl", 00:04:49.309 "recv_buf_size": 4096, 00:04:49.309 "send_buf_size": 4096, 00:04:49.309 "enable_recv_pipe": true, 00:04:49.309 "enable_quickack": false, 00:04:49.309 "enable_placement_id": 0, 00:04:49.309 "enable_zerocopy_send_server": true, 00:04:49.309 "enable_zerocopy_send_client": false, 00:04:49.309 "zerocopy_threshold": 0, 00:04:49.309 "tls_version": 0, 00:04:49.309 "enable_ktls": false 00:04:49.309 } 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "method": "sock_impl_set_options", 00:04:49.309 "params": { 00:04:49.309 "impl_name": "posix", 00:04:49.309 "recv_buf_size": 2097152, 00:04:49.309 "send_buf_size": 2097152, 00:04:49.309 "enable_recv_pipe": true, 00:04:49.309 "enable_quickack": false, 00:04:49.309 "enable_placement_id": 0, 00:04:49.309 "enable_zerocopy_send_server": true, 00:04:49.309 "enable_zerocopy_send_client": false, 00:04:49.309 "zerocopy_threshold": 0, 00:04:49.309 "tls_version": 0, 00:04:49.309 "enable_ktls": false 00:04:49.309 } 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "method": "sock_impl_set_options", 00:04:49.309 "params": { 00:04:49.309 "impl_name": "uring", 00:04:49.309 "recv_buf_size": 2097152, 00:04:49.309 "send_buf_size": 2097152, 00:04:49.309 "enable_recv_pipe": true, 00:04:49.309 "enable_quickack": false, 00:04:49.309 "enable_placement_id": 0, 00:04:49.309 "enable_zerocopy_send_server": false, 00:04:49.309 "enable_zerocopy_send_client": false, 00:04:49.309 "zerocopy_threshold": 0, 00:04:49.309 "tls_version": 0, 00:04:49.309 "enable_ktls": false 00:04:49.309 } 00:04:49.309 } 00:04:49.309 ] 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "subsystem": "vmd", 00:04:49.309 "config": [] 00:04:49.309 }, 00:04:49.309 { 00:04:49.309 "subsystem": "accel", 00:04:49.309 "config": [ 00:04:49.309 { 00:04:49.309 "method": "accel_set_options", 00:04:49.309 "params": { 00:04:49.309 "small_cache_size": 128, 00:04:49.309 "large_cache_size": 16, 00:04:49.309 "task_count": 2048, 00:04:49.309 "sequence_count": 2048, 00:04:49.309 "buf_count": 2048 00:04:49.309 } 00:04:49.309 } 00:04:49.309 ] 00:04:49.309 }, 00:04:49.309 { 00:04:49.310 "subsystem": "bdev", 00:04:49.310 "config": [ 00:04:49.310 { 00:04:49.310 "method": "bdev_set_options", 00:04:49.310 "params": { 00:04:49.310 "bdev_io_pool_size": 65535, 00:04:49.310 "bdev_io_cache_size": 256, 00:04:49.310 "bdev_auto_examine": true, 00:04:49.310 "iobuf_small_cache_size": 128, 00:04:49.310 "iobuf_large_cache_size": 16 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "bdev_raid_set_options", 00:04:49.310 "params": { 00:04:49.310 "process_window_size_kb": 1024, 00:04:49.310 "process_max_bandwidth_mb_sec": 0 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "bdev_iscsi_set_options", 00:04:49.310 "params": { 00:04:49.310 "timeout_sec": 30 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "bdev_nvme_set_options", 00:04:49.310 "params": { 00:04:49.310 "action_on_timeout": "none", 00:04:49.310 "timeout_us": 0, 00:04:49.310 "timeout_admin_us": 0, 00:04:49.310 "keep_alive_timeout_ms": 10000, 
00:04:49.310 "arbitration_burst": 0, 00:04:49.310 "low_priority_weight": 0, 00:04:49.310 "medium_priority_weight": 0, 00:04:49.310 "high_priority_weight": 0, 00:04:49.310 "nvme_adminq_poll_period_us": 10000, 00:04:49.310 "nvme_ioq_poll_period_us": 0, 00:04:49.310 "io_queue_requests": 0, 00:04:49.310 "delay_cmd_submit": true, 00:04:49.310 "transport_retry_count": 4, 00:04:49.310 "bdev_retry_count": 3, 00:04:49.310 "transport_ack_timeout": 0, 00:04:49.310 "ctrlr_loss_timeout_sec": 0, 00:04:49.310 "reconnect_delay_sec": 0, 00:04:49.310 "fast_io_fail_timeout_sec": 0, 00:04:49.310 "disable_auto_failback": false, 00:04:49.310 "generate_uuids": false, 00:04:49.310 "transport_tos": 0, 00:04:49.310 "nvme_error_stat": false, 00:04:49.310 "rdma_srq_size": 0, 00:04:49.310 "io_path_stat": false, 00:04:49.310 "allow_accel_sequence": false, 00:04:49.310 "rdma_max_cq_size": 0, 00:04:49.310 "rdma_cm_event_timeout_ms": 0, 00:04:49.310 "dhchap_digests": [ 00:04:49.310 "sha256", 00:04:49.310 "sha384", 00:04:49.310 "sha512" 00:04:49.310 ], 00:04:49.310 "dhchap_dhgroups": [ 00:04:49.310 "null", 00:04:49.310 "ffdhe2048", 00:04:49.310 "ffdhe3072", 00:04:49.310 "ffdhe4096", 00:04:49.310 "ffdhe6144", 00:04:49.310 "ffdhe8192" 00:04:49.310 ] 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "bdev_nvme_set_hotplug", 00:04:49.310 "params": { 00:04:49.310 "period_us": 100000, 00:04:49.310 "enable": false 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "bdev_wait_for_examine" 00:04:49.310 } 00:04:49.310 ] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "scsi", 00:04:49.310 "config": null 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "scheduler", 00:04:49.310 "config": [ 00:04:49.310 { 00:04:49.310 "method": "framework_set_scheduler", 00:04:49.310 "params": { 00:04:49.310 "name": "static" 00:04:49.310 } 00:04:49.310 } 00:04:49.310 ] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "vhost_scsi", 00:04:49.310 "config": [] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "vhost_blk", 00:04:49.310 "config": [] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "ublk", 00:04:49.310 "config": [] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "nbd", 00:04:49.310 "config": [] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "nvmf", 00:04:49.310 "config": [ 00:04:49.310 { 00:04:49.310 "method": "nvmf_set_config", 00:04:49.310 "params": { 00:04:49.310 "discovery_filter": "match_any", 00:04:49.310 "admin_cmd_passthru": { 00:04:49.310 "identify_ctrlr": false 00:04:49.310 }, 00:04:49.310 "dhchap_digests": [ 00:04:49.310 "sha256", 00:04:49.310 "sha384", 00:04:49.310 "sha512" 00:04:49.310 ], 00:04:49.310 "dhchap_dhgroups": [ 00:04:49.310 "null", 00:04:49.310 "ffdhe2048", 00:04:49.310 "ffdhe3072", 00:04:49.310 "ffdhe4096", 00:04:49.310 "ffdhe6144", 00:04:49.310 "ffdhe8192" 00:04:49.310 ] 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "nvmf_set_max_subsystems", 00:04:49.310 "params": { 00:04:49.310 "max_subsystems": 1024 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "nvmf_set_crdt", 00:04:49.310 "params": { 00:04:49.310 "crdt1": 0, 00:04:49.310 "crdt2": 0, 00:04:49.310 "crdt3": 0 00:04:49.310 } 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "method": "nvmf_create_transport", 00:04:49.310 "params": { 00:04:49.310 "trtype": "TCP", 00:04:49.310 "max_queue_depth": 128, 00:04:49.310 "max_io_qpairs_per_ctrlr": 127, 00:04:49.310 "in_capsule_data_size": 4096, 00:04:49.310 "max_io_size": 131072, 00:04:49.310 
"io_unit_size": 131072, 00:04:49.310 "max_aq_depth": 128, 00:04:49.310 "num_shared_buffers": 511, 00:04:49.310 "buf_cache_size": 4294967295, 00:04:49.310 "dif_insert_or_strip": false, 00:04:49.310 "zcopy": false, 00:04:49.310 "c2h_success": true, 00:04:49.310 "sock_priority": 0, 00:04:49.310 "abort_timeout_sec": 1, 00:04:49.310 "ack_timeout": 0, 00:04:49.310 "data_wr_pool_size": 0 00:04:49.310 } 00:04:49.310 } 00:04:49.310 ] 00:04:49.310 }, 00:04:49.310 { 00:04:49.310 "subsystem": "iscsi", 00:04:49.310 "config": [ 00:04:49.310 { 00:04:49.310 "method": "iscsi_set_options", 00:04:49.310 "params": { 00:04:49.310 "node_base": "iqn.2016-06.io.spdk", 00:04:49.310 "max_sessions": 128, 00:04:49.310 "max_connections_per_session": 2, 00:04:49.310 "max_queue_depth": 64, 00:04:49.310 "default_time2wait": 2, 00:04:49.310 "default_time2retain": 20, 00:04:49.310 "first_burst_length": 8192, 00:04:49.310 "immediate_data": true, 00:04:49.310 "allow_duplicated_isid": false, 00:04:49.310 "error_recovery_level": 0, 00:04:49.310 "nop_timeout": 60, 00:04:49.310 "nop_in_interval": 30, 00:04:49.311 "disable_chap": false, 00:04:49.311 "require_chap": false, 00:04:49.311 "mutual_chap": false, 00:04:49.311 "chap_group": 0, 00:04:49.311 "max_large_datain_per_connection": 64, 00:04:49.311 "max_r2t_per_connection": 4, 00:04:49.311 "pdu_pool_size": 36864, 00:04:49.311 "immediate_data_pool_size": 16384, 00:04:49.311 "data_out_pool_size": 2048 00:04:49.311 } 00:04:49.311 } 00:04:49.311 ] 00:04:49.311 } 00:04:49.311 ] 00:04:49.311 } 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57773 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57773 ']' 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57773 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57773 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.311 killing process with pid 57773 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57773' 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57773 00:04:49.311 08:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57773 00:04:52.603 08:04:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57833 00:04:52.603 08:04:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.603 08:04:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57833 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57833 ']' 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57833 00:04:57.901 
08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57833 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.901 killing process with pid 57833 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57833' 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57833 00:04:57.901 08:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57833 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.808 00:04:59.808 real 0m11.940s 00:04:59.808 user 0m10.990s 00:04:59.808 sys 0m1.281s 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.808 ************************************ 00:04:59.808 END TEST skip_rpc_with_json 00:04:59.808 ************************************ 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.808 08:05:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:59.808 08:05:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.808 08:05:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.808 08:05:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.808 ************************************ 00:04:59.808 START TEST skip_rpc_with_delay 00:04:59.808 ************************************ 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.808 [2024-11-17 08:05:04.711451] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.808 00:04:59.808 real 0m0.199s 00:04:59.808 user 0m0.098s 00:04:59.808 sys 0m0.100s 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.808 08:05:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:59.808 ************************************ 00:04:59.808 END TEST skip_rpc_with_delay 00:04:59.808 ************************************ 00:05:00.068 08:05:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.068 08:05:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.068 08:05:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.068 08:05:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.068 08:05:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.068 08:05:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.068 ************************************ 00:05:00.068 START TEST exit_on_failed_rpc_init 00:05:00.068 ************************************ 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57968 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57968 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57968 ']' 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.068 08:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.068 [2024-11-17 08:05:04.983737] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:00.068 [2024-11-17 08:05:04.983909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57968 ] 00:05:00.327 [2024-11-17 08:05:05.167421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.327 [2024-11-17 08:05:05.313738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.910 [2024-11-17 08:05:05.596945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.478 08:05:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.478 [2024-11-17 08:05:06.432176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:01.478 [2024-11-17 08:05:06.432319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:05:01.738 [2024-11-17 08:05:06.618868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.997 [2024-11-17 08:05:06.756226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.997 [2024-11-17 08:05:06.756359] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:01.997 [2024-11-17 08:05:06.756378] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.997 [2024-11-17 08:05:06.756396] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57968 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57968 ']' 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57968 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57968 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.257 killing process with pid 57968 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57968' 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57968 00:05:02.257 08:05:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57968 00:05:04.794 00:05:04.794 real 0m4.808s 00:05:04.794 user 0m4.926s 00:05:04.794 sys 0m0.870s 00:05:04.794 08:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.794 08:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.794 ************************************ 00:05:04.794 END TEST exit_on_failed_rpc_init 00:05:04.794 ************************************ 00:05:04.794 08:05:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.794 00:05:04.794 real 0m25.125s 00:05:04.794 user 0m23.176s 00:05:04.794 sys 0m3.192s 00:05:04.794 08:05:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.794 08:05:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.794 ************************************ 00:05:04.794 END TEST skip_rpc 00:05:04.794 ************************************ 00:05:04.794 08:05:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:04.794 08:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.794 08:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.794 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.053 
************************************ 00:05:05.053 START TEST rpc_client 00:05:05.053 ************************************ 00:05:05.053 08:05:09 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:05.053 * Looking for test storage... 00:05:05.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:05.053 08:05:09 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.053 08:05:09 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.053 08:05:09 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.053 08:05:10 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.053 08:05:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:05.053 08:05:10 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.053 08:05:10 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.053 --rc genhtml_branch_coverage=1 00:05:05.053 --rc genhtml_function_coverage=1 00:05:05.053 --rc genhtml_legend=1 00:05:05.053 --rc geninfo_all_blocks=1 00:05:05.053 --rc geninfo_unexecuted_blocks=1 00:05:05.053 00:05:05.053 ' 00:05:05.053 08:05:10 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.053 --rc genhtml_branch_coverage=1 00:05:05.053 --rc genhtml_function_coverage=1 00:05:05.053 --rc genhtml_legend=1 00:05:05.053 --rc geninfo_all_blocks=1 00:05:05.053 --rc geninfo_unexecuted_blocks=1 00:05:05.053 00:05:05.053 ' 00:05:05.053 08:05:10 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.053 --rc genhtml_branch_coverage=1 00:05:05.053 --rc genhtml_function_coverage=1 00:05:05.053 --rc genhtml_legend=1 00:05:05.053 --rc geninfo_all_blocks=1 00:05:05.053 --rc geninfo_unexecuted_blocks=1 00:05:05.053 00:05:05.053 ' 00:05:05.053 08:05:10 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.053 --rc genhtml_branch_coverage=1 00:05:05.053 --rc genhtml_function_coverage=1 00:05:05.053 --rc genhtml_legend=1 00:05:05.053 --rc geninfo_all_blocks=1 00:05:05.053 --rc geninfo_unexecuted_blocks=1 00:05:05.053 00:05:05.053 ' 00:05:05.053 08:05:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:05.313 OK 00:05:05.313 08:05:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:05.313 00:05:05.313 real 0m0.313s 00:05:05.313 user 0m0.155s 00:05:05.313 sys 0m0.177s 00:05:05.313 08:05:10 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.313 08:05:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:05.313 ************************************ 00:05:05.313 END TEST rpc_client 00:05:05.313 ************************************ 00:05:05.313 08:05:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:05.313 08:05:10 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.313 08:05:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.313 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:05.313 ************************************ 00:05:05.313 START TEST json_config 00:05:05.313 ************************************ 00:05:05.313 08:05:10 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:05.313 08:05:10 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.313 08:05:10 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.313 08:05:10 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.572 08:05:10 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.572 08:05:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.572 08:05:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.572 08:05:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.572 08:05:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.572 08:05:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.572 08:05:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.572 08:05:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.572 08:05:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.572 08:05:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.572 08:05:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:05.572 08:05:10 json_config -- scripts/common.sh@345 -- # : 1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.572 08:05:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.572 08:05:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@353 -- # local d=1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.572 08:05:10 json_config -- scripts/common.sh@355 -- # echo 1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.572 08:05:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:05.573 08:05:10 json_config -- scripts/common.sh@353 -- # local d=2 00:05:05.573 08:05:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.573 08:05:10 json_config -- scripts/common.sh@355 -- # echo 2 00:05:05.573 08:05:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.573 08:05:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.573 08:05:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.573 08:05:10 json_config -- scripts/common.sh@368 -- # return 0 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.573 --rc genhtml_branch_coverage=1 00:05:05.573 --rc genhtml_function_coverage=1 00:05:05.573 --rc genhtml_legend=1 00:05:05.573 --rc geninfo_all_blocks=1 00:05:05.573 --rc geninfo_unexecuted_blocks=1 00:05:05.573 00:05:05.573 ' 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.573 --rc genhtml_branch_coverage=1 00:05:05.573 --rc genhtml_function_coverage=1 00:05:05.573 --rc genhtml_legend=1 00:05:05.573 --rc geninfo_all_blocks=1 00:05:05.573 --rc geninfo_unexecuted_blocks=1 00:05:05.573 00:05:05.573 ' 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.573 --rc genhtml_branch_coverage=1 00:05:05.573 --rc genhtml_function_coverage=1 00:05:05.573 --rc genhtml_legend=1 00:05:05.573 --rc geninfo_all_blocks=1 00:05:05.573 --rc geninfo_unexecuted_blocks=1 00:05:05.573 00:05:05.573 ' 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.573 --rc genhtml_branch_coverage=1 00:05:05.573 --rc genhtml_function_coverage=1 00:05:05.573 --rc genhtml_legend=1 00:05:05.573 --rc geninfo_all_blocks=1 00:05:05.573 --rc geninfo_unexecuted_blocks=1 00:05:05.573 00:05:05.573 ' 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.573 08:05:10 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:05.573 08:05:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.573 08:05:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.573 08:05:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.573 08:05:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.573 08:05:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.573 08:05:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.573 08:05:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.573 08:05:10 json_config -- paths/export.sh@5 -- # export PATH 00:05:05.573 08:05:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@51 -- # : 0 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.573 08:05:10 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.573 08:05:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.573 INFO: JSON configuration test init 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.573 Waiting for target to run... 
00:05:05.573 08:05:10 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:05.573 08:05:10 json_config -- json_config/common.sh@9 -- # local app=target 00:05:05.573 08:05:10 json_config -- json_config/common.sh@10 -- # shift 00:05:05.573 08:05:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.573 08:05:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.573 08:05:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.573 08:05:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.573 08:05:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.573 08:05:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58162 00:05:05.573 08:05:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.573 08:05:10 json_config -- json_config/common.sh@25 -- # waitforlisten 58162 /var/tmp/spdk_tgt.sock 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 58162 ']' 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.573 08:05:10 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.573 08:05:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.574 08:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.574 [2024-11-17 08:05:10.561768] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:05.574 [2024-11-17 08:05:10.562055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58162 ] 00:05:06.511 [2024-11-17 08:05:11.162985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.511 [2024-11-17 08:05:11.293435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.511 08:05:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.511 08:05:11 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:06.511 08:05:11 json_config -- json_config/common.sh@26 -- # echo '' 00:05:06.511 00:05:06.511 08:05:11 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:06.511 08:05:11 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:06.511 08:05:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.511 08:05:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.511 08:05:11 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:06.511 08:05:11 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:06.511 08:05:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.511 08:05:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.511 08:05:11 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:06.511 08:05:11 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:06.511 08:05:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:07.080 [2024-11-17 08:05:11.849740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:07.649 08:05:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.649 08:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:07.649 08:05:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:07.649 08:05:12 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@54 -- # sort 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:07.908 08:05:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.908 08:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:07.908 08:05:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.908 08:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:07.908 08:05:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.908 08:05:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:08.167 MallocForNvmf0 00:05:08.167 08:05:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.167 08:05:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:08.427 MallocForNvmf1 00:05:08.427 08:05:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.427 08:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:08.427 [2024-11-17 08:05:13.386324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.427 08:05:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.427 08:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:08.686 08:05:13 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:08.686 08:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:08.945 08:05:13 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:08.945 08:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.204 08:05:13 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.204 08:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:09.204 [2024-11-17 08:05:14.145769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.204 08:05:14 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:09.204 08:05:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.204 08:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.464 08:05:14 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:09.464 08:05:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.464 08:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.464 08:05:14 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:09.464 08:05:14 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.464 08:05:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:09.464 MallocBdevForConfigChangeCheck 00:05:09.723 08:05:14 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:09.723 08:05:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.723 08:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.723 08:05:14 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:09.723 08:05:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.983 INFO: shutting down applications... 00:05:09.983 08:05:14 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:05:09.983 08:05:14 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:09.983 08:05:14 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:09.983 08:05:14 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:09.983 08:05:14 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:10.242 Calling clear_iscsi_subsystem 00:05:10.242 Calling clear_nvmf_subsystem 00:05:10.242 Calling clear_nbd_subsystem 00:05:10.242 Calling clear_ublk_subsystem 00:05:10.242 Calling clear_vhost_blk_subsystem 00:05:10.242 Calling clear_vhost_scsi_subsystem 00:05:10.242 Calling clear_bdev_subsystem 00:05:10.242 08:05:15 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:10.242 08:05:15 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:10.242 08:05:15 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:10.242 08:05:15 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.242 08:05:15 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:10.242 08:05:15 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:10.811 08:05:15 json_config -- json_config/json_config.sh@352 -- # break 00:05:10.811 08:05:15 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:10.811 08:05:15 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:10.811 08:05:15 json_config -- json_config/common.sh@31 -- # local app=target 00:05:10.811 08:05:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.811 08:05:15 json_config -- json_config/common.sh@35 -- # [[ -n 58162 ]] 00:05:10.811 08:05:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58162 00:05:10.811 08:05:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.811 08:05:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.811 08:05:15 json_config -- json_config/common.sh@41 -- # kill -0 58162 00:05:10.811 08:05:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.380 08:05:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.380 08:05:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.380 08:05:16 json_config -- json_config/common.sh@41 -- # kill -0 58162 00:05:11.380 08:05:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.639 08:05:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.639 08:05:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.639 08:05:16 json_config -- json_config/common.sh@41 -- # kill -0 58162 00:05:11.639 08:05:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.209 08:05:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.209 08:05:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.209 SPDK target shutdown done 00:05:12.209 INFO: relaunching applications... 
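[editor's note] The shutdown that follows uses the pattern traced from json_config/common.sh@38-45: send SIGINT to the target pid, then probe it with kill -0 every 0.5 s for at most 30 iterations. A minimal sketch of that loop (variable names approximate):

    kill -SIGINT "$pid"                         # ask spdk_tgt to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break     # kill -0 only probes; failure means the pid is gone
        sleep 0.5
    done

Here pid 58162 needs a few half-second rounds before "SPDK target shutdown done" is printed.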
00:05:12.209 08:05:17 json_config -- json_config/common.sh@41 -- # kill -0 58162 00:05:12.209 08:05:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.209 08:05:17 json_config -- json_config/common.sh@43 -- # break 00:05:12.209 08:05:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.209 08:05:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.209 08:05:17 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:12.209 08:05:17 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.209 08:05:17 json_config -- json_config/common.sh@9 -- # local app=target 00:05:12.209 08:05:17 json_config -- json_config/common.sh@10 -- # shift 00:05:12.209 08:05:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.209 08:05:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.209 08:05:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.209 08:05:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.209 08:05:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.209 08:05:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58371 00:05:12.209 08:05:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.209 08:05:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.209 Waiting for target to run... 00:05:12.209 08:05:17 json_config -- json_config/common.sh@25 -- # waitforlisten 58371 /var/tmp/spdk_tgt.sock 00:05:12.209 08:05:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 58371 ']' 00:05:12.209 08:05:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.209 08:05:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.209 08:05:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.209 08:05:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.209 08:05:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.468 [2024-11-17 08:05:17.264226] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:12.468 [2024-11-17 08:05:17.264371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58371 ] 00:05:13.037 [2024-11-17 08:05:17.854366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.037 [2024-11-17 08:05:17.987350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.607 [2024-11-17 08:05:18.318739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.177 [2024-11-17 08:05:18.986743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.177 [2024-11-17 08:05:19.018883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.177 00:05:14.177 INFO: Checking if target configuration is the same... 00:05:14.177 08:05:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.177 08:05:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:14.177 08:05:19 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.178 08:05:19 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:14.178 08:05:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.178 08:05:19 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.178 08:05:19 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:14.178 08:05:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.178 + '[' 2 -ne 2 ']' 00:05:14.178 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:14.178 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:14.178 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:14.178 +++ basename /dev/fd/62 00:05:14.178 ++ mktemp /tmp/62.XXX 00:05:14.178 + tmp_file_1=/tmp/62.ANt 00:05:14.178 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.178 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.178 + tmp_file_2=/tmp/spdk_tgt_config.json.KqE 00:05:14.178 + ret=0 00:05:14.178 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.437 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:14.696 + diff -u /tmp/62.ANt /tmp/spdk_tgt_config.json.KqE 00:05:14.696 INFO: JSON config files are the same 00:05:14.696 + echo 'INFO: JSON config files are the same' 00:05:14.696 + rm /tmp/62.ANt /tmp/spdk_tgt_config.json.KqE 00:05:14.696 + exit 0 00:05:14.696 INFO: changing configuration and checking if this can be detected... 00:05:14.696 08:05:19 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:14.696 08:05:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
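[editor's note] For the relaunch the target is restarted with --json spdk_tgt_config.json, so the configuration saved earlier is replayed at startup, and waitforlisten blocks until the new process answers on /var/tmp/spdk_tgt.sock. The real waitforlisten lives in autotest_common.sh and is not shown in this trace; a minimal illustrative stand-in might look like this (the rpc_get_methods probe and the retry counts are assumptions, not the actual helper):

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1            # give up if the target died
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }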
00:05:14.696 08:05:19 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.696 08:05:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.956 08:05:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:14.956 08:05:19 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.956 08:05:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.956 + '[' 2 -ne 2 ']' 00:05:14.956 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:14.956 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:14.956 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:14.956 +++ basename /dev/fd/62 00:05:14.956 ++ mktemp /tmp/62.XXX 00:05:14.956 + tmp_file_1=/tmp/62.fUS 00:05:14.956 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:14.956 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.956 + tmp_file_2=/tmp/spdk_tgt_config.json.FQi 00:05:14.956 + ret=0 00:05:14.956 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.215 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:15.215 + diff -u /tmp/62.fUS /tmp/spdk_tgt_config.json.FQi 00:05:15.215 + ret=1 00:05:15.215 + echo '=== Start of file: /tmp/62.fUS ===' 00:05:15.215 + cat /tmp/62.fUS 00:05:15.215 + echo '=== End of file: /tmp/62.fUS ===' 00:05:15.215 + echo '' 00:05:15.215 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FQi ===' 00:05:15.215 + cat /tmp/spdk_tgt_config.json.FQi 00:05:15.215 + echo '=== End of file: /tmp/spdk_tgt_config.json.FQi ===' 00:05:15.215 + echo '' 00:05:15.215 + rm /tmp/62.fUS /tmp/spdk_tgt_config.json.FQi 00:05:15.215 + exit 1 00:05:15.215 INFO: configuration change detected. 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
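[editor's note] Change detection here works purely on JSON dumps: save_config is fetched over RPC, both the live dump and spdk_tgt_config.json are normalized with config_filter.py -method sort, and a plain diff -u decides the result (exit 0 when identical, exit 1 after MallocBdevForConfigChangeCheck is deleted). A condensed sketch of what json_diff.sh is doing above (temp-file names arbitrary; feeding config_filter.py via stdin is an assumption):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $RPC save_config | $FILTER -method sort > /tmp/live.json
    $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ondisk.json
    diff -u /tmp/ondisk.json /tmp/live.json \
        && echo 'INFO: JSON config files are the same' \
        || echo 'INFO: configuration change detected.'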
00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:15.215 08:05:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.215 08:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@324 -- # [[ -n 58371 ]] 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.215 08:05:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.215 08:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:15.215 08:05:20 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.215 08:05:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.215 08:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.475 08:05:20 json_config -- json_config/json_config.sh@330 -- # killprocess 58371 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@954 -- # '[' -z 58371 ']' 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@958 -- # kill -0 58371 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@959 -- # uname 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58371 00:05:15.475 killing process with pid 58371 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58371' 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@973 -- # kill 58371 00:05:15.475 08:05:20 json_config -- common/autotest_common.sh@978 -- # wait 58371 00:05:16.414 08:05:21 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:16.415 08:05:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:16.415 08:05:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.415 08:05:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 INFO: Success 00:05:16.677 08:05:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:16.677 08:05:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:16.677 ************************************ 00:05:16.677 END TEST json_config 00:05:16.677 
************************************ 00:05:16.677 00:05:16.677 real 0m11.246s 00:05:16.677 user 0m13.095s 00:05:16.677 sys 0m2.739s 00:05:16.677 08:05:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.677 08:05:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 08:05:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:16.677 08:05:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.677 08:05:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.677 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 ************************************ 00:05:16.677 START TEST json_config_extra_key 00:05:16.677 ************************************ 00:05:16.677 08:05:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:16.677 08:05:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.677 08:05:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.677 08:05:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.677 08:05:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:16.677 08:05:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:16.937 08:05:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.937 08:05:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:16.937 08:05:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.937 08:05:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.937 08:05:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.937 08:05:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:16.937 08:05:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.937 08:05:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.937 --rc genhtml_branch_coverage=1 00:05:16.937 --rc genhtml_function_coverage=1 00:05:16.937 --rc genhtml_legend=1 00:05:16.937 --rc geninfo_all_blocks=1 00:05:16.937 --rc geninfo_unexecuted_blocks=1 00:05:16.937 00:05:16.937 ' 00:05:16.937 08:05:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.937 --rc genhtml_branch_coverage=1 00:05:16.937 --rc genhtml_function_coverage=1 00:05:16.937 --rc genhtml_legend=1 00:05:16.937 --rc geninfo_all_blocks=1 00:05:16.937 --rc geninfo_unexecuted_blocks=1 00:05:16.937 00:05:16.937 ' 00:05:16.937 08:05:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.937 --rc genhtml_branch_coverage=1 00:05:16.937 --rc genhtml_function_coverage=1 00:05:16.937 --rc genhtml_legend=1 00:05:16.937 --rc geninfo_all_blocks=1 00:05:16.937 --rc geninfo_unexecuted_blocks=1 00:05:16.937 00:05:16.937 ' 00:05:16.937 08:05:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.937 --rc genhtml_branch_coverage=1 00:05:16.937 --rc genhtml_function_coverage=1 00:05:16.937 --rc genhtml_legend=1 00:05:16.937 --rc geninfo_all_blocks=1 00:05:16.937 --rc geninfo_unexecuted_blocks=1 00:05:16.937 00:05:16.937 ' 00:05:16.937 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.937 08:05:21 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.937 08:05:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.938 08:05:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:16.938 08:05:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.938 08:05:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.938 08:05:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.938 08:05:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.938 08:05:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.938 08:05:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.938 08:05:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:16.938 08:05:21 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:16.938 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:16.938 08:05:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:16.938 INFO: launching applications... 
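[editor's note] json_config_extra_key drives the same common.sh helpers, but its per-app bookkeeping is kept in bash associative arrays, as the declare -A trace above shows. A recap of that pattern (the launch line is only an approximation of what common.sh does with these values; the actual spdk_tgt invocation is traced further below):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    # Launch then uses the per-app entries, roughly:
    #   spdk_tgt ${app_params[target]} -r ${app_socket[target]} --json ${configs_path[target]} &
    #   app_pid[target]=$!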
00:05:16.938 08:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58545 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.938 Waiting for target to run... 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58545 /var/tmp/spdk_tgt.sock 00:05:16.938 08:05:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:16.938 08:05:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58545 ']' 00:05:16.938 08:05:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.938 08:05:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.938 08:05:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.938 08:05:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.938 08:05:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.938 [2024-11-17 08:05:21.865597] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:16.938 [2024-11-17 08:05:21.865968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58545 ] 00:05:17.508 [2024-11-17 08:05:22.453598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.767 [2024-11-17 08:05:22.580361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.026 [2024-11-17 08:05:22.808623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.595 08:05:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.595 08:05:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:18.595 00:05:18.595 INFO: shutting down applications... 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:18.595 08:05:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:18.595 08:05:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58545 ]] 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58545 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:18.595 08:05:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.855 08:05:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.855 08:05:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.855 08:05:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:18.855 08:05:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.424 08:05:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.424 08:05:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.424 08:05:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:19.424 08:05:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.993 08:05:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.993 08:05:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.993 08:05:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:19.993 08:05:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.562 08:05:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.562 08:05:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.562 08:05:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:20.562 08:05:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.131 08:05:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.131 08:05:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.131 08:05:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:21.131 08:05:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58545 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.390 SPDK target shutdown done 00:05:21.390 08:05:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.390 Success 00:05:21.390 08:05:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.390 00:05:21.390 real 0m4.854s 00:05:21.390 user 0m4.275s 00:05:21.390 sys 0m0.848s 00:05:21.390 
08:05:26 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.390 08:05:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.390 ************************************ 00:05:21.390 END TEST json_config_extra_key 00:05:21.390 ************************************ 00:05:21.648 08:05:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.648 08:05:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.648 08:05:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.648 08:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:21.648 ************************************ 00:05:21.648 START TEST alias_rpc 00:05:21.648 ************************************ 00:05:21.648 08:05:26 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.648 * Looking for test storage... 00:05:21.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.648 08:05:26 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.648 08:05:26 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.648 08:05:26 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.648 08:05:26 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.648 08:05:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.649 08:05:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.909 08:05:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.909 --rc genhtml_branch_coverage=1 00:05:21.909 --rc genhtml_function_coverage=1 00:05:21.909 --rc genhtml_legend=1 00:05:21.909 --rc geninfo_all_blocks=1 00:05:21.909 --rc geninfo_unexecuted_blocks=1 00:05:21.909 00:05:21.909 ' 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.909 --rc genhtml_branch_coverage=1 00:05:21.909 --rc genhtml_function_coverage=1 00:05:21.909 --rc genhtml_legend=1 00:05:21.909 --rc geninfo_all_blocks=1 00:05:21.909 --rc geninfo_unexecuted_blocks=1 00:05:21.909 00:05:21.909 ' 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.909 --rc genhtml_branch_coverage=1 00:05:21.909 --rc genhtml_function_coverage=1 00:05:21.909 --rc genhtml_legend=1 00:05:21.909 --rc geninfo_all_blocks=1 00:05:21.909 --rc geninfo_unexecuted_blocks=1 00:05:21.909 00:05:21.909 ' 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.909 --rc genhtml_branch_coverage=1 00:05:21.909 --rc genhtml_function_coverage=1 00:05:21.909 --rc genhtml_legend=1 00:05:21.909 --rc geninfo_all_blocks=1 00:05:21.909 --rc geninfo_unexecuted_blocks=1 00:05:21.909 00:05:21.909 ' 00:05:21.909 08:05:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.909 08:05:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58656 00:05:21.909 08:05:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.909 08:05:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58656 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58656 ']' 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:21.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.909 08:05:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.909 [2024-11-17 08:05:26.780353] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:21.909 [2024-11-17 08:05:26.780520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58656 ] 00:05:22.168 [2024-11-17 08:05:26.969020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.168 [2024-11-17 08:05:27.098115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.428 [2024-11-17 08:05:27.392185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.453 08:05:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.453 08:05:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58656 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58656 ']' 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58656 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58656 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.453 killing process with pid 58656 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58656' 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 58656 00:05:23.453 08:05:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 58656 00:05:25.990 00:05:25.990 real 0m4.558s 00:05:25.990 user 0m4.318s 00:05:25.990 sys 0m0.814s 00:05:25.990 08:05:30 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.990 08:05:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.990 ************************************ 00:05:25.990 END TEST alias_rpc 00:05:25.990 ************************************ 00:05:26.250 08:05:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.250 08:05:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.250 08:05:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.250 08:05:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.250 08:05:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.250 ************************************ 00:05:26.250 START TEST spdkcli_tcp 00:05:26.250 ************************************ 00:05:26.250 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:26.250 * Looking for test storage... 
00:05:26.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:26.250 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.250 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.250 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.510 08:05:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.510 --rc genhtml_branch_coverage=1 00:05:26.510 --rc genhtml_function_coverage=1 00:05:26.510 --rc genhtml_legend=1 00:05:26.510 --rc geninfo_all_blocks=1 00:05:26.510 --rc geninfo_unexecuted_blocks=1 00:05:26.510 00:05:26.510 ' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.510 --rc genhtml_branch_coverage=1 00:05:26.510 --rc genhtml_function_coverage=1 00:05:26.510 --rc genhtml_legend=1 00:05:26.510 --rc geninfo_all_blocks=1 00:05:26.510 --rc geninfo_unexecuted_blocks=1 00:05:26.510 
00:05:26.510 ' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.510 --rc genhtml_branch_coverage=1 00:05:26.510 --rc genhtml_function_coverage=1 00:05:26.510 --rc genhtml_legend=1 00:05:26.510 --rc geninfo_all_blocks=1 00:05:26.510 --rc geninfo_unexecuted_blocks=1 00:05:26.510 00:05:26.510 ' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.510 --rc genhtml_branch_coverage=1 00:05:26.510 --rc genhtml_function_coverage=1 00:05:26.510 --rc genhtml_legend=1 00:05:26.510 --rc geninfo_all_blocks=1 00:05:26.510 --rc geninfo_unexecuted_blocks=1 00:05:26.510 00:05:26.510 ' 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58769 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.510 08:05:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58769 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58769 ']' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.510 08:05:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.510 [2024-11-17 08:05:31.423150] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
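[editor's note] The lines that follow show how spdkcli_tcp exercises the RPC server over TCP instead of the UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at that address with retries enabled. A recap of the bridge as traced from spdkcli/tcp.sh@30-33 (values copied from the log):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The rpc_get_methods dump that follows is the full method list returned through that TCP path.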
00:05:26.510 [2024-11-17 08:05:31.423311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58769 ] 00:05:26.770 [2024-11-17 08:05:31.612914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.770 [2024-11-17 08:05:31.757520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.770 [2024-11-17 08:05:31.757555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.339 [2024-11-17 08:05:32.062255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.908 08:05:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.908 08:05:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:27.908 08:05:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58791 00:05:27.908 08:05:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.908 08:05:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.168 [ 00:05:28.168 "bdev_malloc_delete", 00:05:28.168 "bdev_malloc_create", 00:05:28.168 "bdev_null_resize", 00:05:28.168 "bdev_null_delete", 00:05:28.168 "bdev_null_create", 00:05:28.168 "bdev_nvme_cuse_unregister", 00:05:28.168 "bdev_nvme_cuse_register", 00:05:28.168 "bdev_opal_new_user", 00:05:28.168 "bdev_opal_set_lock_state", 00:05:28.168 "bdev_opal_delete", 00:05:28.168 "bdev_opal_get_info", 00:05:28.168 "bdev_opal_create", 00:05:28.168 "bdev_nvme_opal_revert", 00:05:28.168 "bdev_nvme_opal_init", 00:05:28.168 "bdev_nvme_send_cmd", 00:05:28.168 "bdev_nvme_set_keys", 00:05:28.168 "bdev_nvme_get_path_iostat", 00:05:28.168 "bdev_nvme_get_mdns_discovery_info", 00:05:28.168 "bdev_nvme_stop_mdns_discovery", 00:05:28.168 "bdev_nvme_start_mdns_discovery", 00:05:28.168 "bdev_nvme_set_multipath_policy", 00:05:28.168 "bdev_nvme_set_preferred_path", 00:05:28.168 "bdev_nvme_get_io_paths", 00:05:28.168 "bdev_nvme_remove_error_injection", 00:05:28.168 "bdev_nvme_add_error_injection", 00:05:28.168 "bdev_nvme_get_discovery_info", 00:05:28.168 "bdev_nvme_stop_discovery", 00:05:28.168 "bdev_nvme_start_discovery", 00:05:28.168 "bdev_nvme_get_controller_health_info", 00:05:28.169 "bdev_nvme_disable_controller", 00:05:28.169 "bdev_nvme_enable_controller", 00:05:28.169 "bdev_nvme_reset_controller", 00:05:28.169 "bdev_nvme_get_transport_statistics", 00:05:28.169 "bdev_nvme_apply_firmware", 00:05:28.169 "bdev_nvme_detach_controller", 00:05:28.169 "bdev_nvme_get_controllers", 00:05:28.169 "bdev_nvme_attach_controller", 00:05:28.169 "bdev_nvme_set_hotplug", 00:05:28.169 "bdev_nvme_set_options", 00:05:28.169 "bdev_passthru_delete", 00:05:28.169 "bdev_passthru_create", 00:05:28.169 "bdev_lvol_set_parent_bdev", 00:05:28.169 "bdev_lvol_set_parent", 00:05:28.169 "bdev_lvol_check_shallow_copy", 00:05:28.169 "bdev_lvol_start_shallow_copy", 00:05:28.169 "bdev_lvol_grow_lvstore", 00:05:28.169 "bdev_lvol_get_lvols", 00:05:28.169 "bdev_lvol_get_lvstores", 00:05:28.169 "bdev_lvol_delete", 00:05:28.169 "bdev_lvol_set_read_only", 00:05:28.169 "bdev_lvol_resize", 00:05:28.169 "bdev_lvol_decouple_parent", 00:05:28.169 "bdev_lvol_inflate", 00:05:28.169 "bdev_lvol_rename", 00:05:28.169 "bdev_lvol_clone_bdev", 00:05:28.169 "bdev_lvol_clone", 00:05:28.169 "bdev_lvol_snapshot", 
00:05:28.169 "bdev_lvol_create", 00:05:28.169 "bdev_lvol_delete_lvstore", 00:05:28.169 "bdev_lvol_rename_lvstore", 00:05:28.169 "bdev_lvol_create_lvstore", 00:05:28.169 "bdev_raid_set_options", 00:05:28.169 "bdev_raid_remove_base_bdev", 00:05:28.169 "bdev_raid_add_base_bdev", 00:05:28.169 "bdev_raid_delete", 00:05:28.169 "bdev_raid_create", 00:05:28.169 "bdev_raid_get_bdevs", 00:05:28.169 "bdev_error_inject_error", 00:05:28.169 "bdev_error_delete", 00:05:28.169 "bdev_error_create", 00:05:28.169 "bdev_split_delete", 00:05:28.169 "bdev_split_create", 00:05:28.169 "bdev_delay_delete", 00:05:28.169 "bdev_delay_create", 00:05:28.169 "bdev_delay_update_latency", 00:05:28.169 "bdev_zone_block_delete", 00:05:28.169 "bdev_zone_block_create", 00:05:28.169 "blobfs_create", 00:05:28.169 "blobfs_detect", 00:05:28.169 "blobfs_set_cache_size", 00:05:28.169 "bdev_aio_delete", 00:05:28.169 "bdev_aio_rescan", 00:05:28.169 "bdev_aio_create", 00:05:28.169 "bdev_ftl_set_property", 00:05:28.169 "bdev_ftl_get_properties", 00:05:28.169 "bdev_ftl_get_stats", 00:05:28.169 "bdev_ftl_unmap", 00:05:28.169 "bdev_ftl_unload", 00:05:28.169 "bdev_ftl_delete", 00:05:28.169 "bdev_ftl_load", 00:05:28.169 "bdev_ftl_create", 00:05:28.169 "bdev_virtio_attach_controller", 00:05:28.169 "bdev_virtio_scsi_get_devices", 00:05:28.169 "bdev_virtio_detach_controller", 00:05:28.169 "bdev_virtio_blk_set_hotplug", 00:05:28.169 "bdev_iscsi_delete", 00:05:28.169 "bdev_iscsi_create", 00:05:28.169 "bdev_iscsi_set_options", 00:05:28.169 "bdev_uring_delete", 00:05:28.169 "bdev_uring_rescan", 00:05:28.169 "bdev_uring_create", 00:05:28.169 "accel_error_inject_error", 00:05:28.169 "ioat_scan_accel_module", 00:05:28.169 "dsa_scan_accel_module", 00:05:28.169 "iaa_scan_accel_module", 00:05:28.169 "vfu_virtio_create_fs_endpoint", 00:05:28.169 "vfu_virtio_create_scsi_endpoint", 00:05:28.169 "vfu_virtio_scsi_remove_target", 00:05:28.169 "vfu_virtio_scsi_add_target", 00:05:28.169 "vfu_virtio_create_blk_endpoint", 00:05:28.169 "vfu_virtio_delete_endpoint", 00:05:28.169 "keyring_file_remove_key", 00:05:28.169 "keyring_file_add_key", 00:05:28.169 "keyring_linux_set_options", 00:05:28.169 "fsdev_aio_delete", 00:05:28.169 "fsdev_aio_create", 00:05:28.169 "iscsi_get_histogram", 00:05:28.169 "iscsi_enable_histogram", 00:05:28.169 "iscsi_set_options", 00:05:28.169 "iscsi_get_auth_groups", 00:05:28.169 "iscsi_auth_group_remove_secret", 00:05:28.169 "iscsi_auth_group_add_secret", 00:05:28.169 "iscsi_delete_auth_group", 00:05:28.169 "iscsi_create_auth_group", 00:05:28.169 "iscsi_set_discovery_auth", 00:05:28.169 "iscsi_get_options", 00:05:28.169 "iscsi_target_node_request_logout", 00:05:28.169 "iscsi_target_node_set_redirect", 00:05:28.169 "iscsi_target_node_set_auth", 00:05:28.169 "iscsi_target_node_add_lun", 00:05:28.169 "iscsi_get_stats", 00:05:28.169 "iscsi_get_connections", 00:05:28.169 "iscsi_portal_group_set_auth", 00:05:28.169 "iscsi_start_portal_group", 00:05:28.169 "iscsi_delete_portal_group", 00:05:28.169 "iscsi_create_portal_group", 00:05:28.169 "iscsi_get_portal_groups", 00:05:28.169 "iscsi_delete_target_node", 00:05:28.169 "iscsi_target_node_remove_pg_ig_maps", 00:05:28.169 "iscsi_target_node_add_pg_ig_maps", 00:05:28.169 "iscsi_create_target_node", 00:05:28.169 "iscsi_get_target_nodes", 00:05:28.169 "iscsi_delete_initiator_group", 00:05:28.169 "iscsi_initiator_group_remove_initiators", 00:05:28.169 "iscsi_initiator_group_add_initiators", 00:05:28.169 "iscsi_create_initiator_group", 00:05:28.169 "iscsi_get_initiator_groups", 00:05:28.169 
"nvmf_set_crdt", 00:05:28.169 "nvmf_set_config", 00:05:28.169 "nvmf_set_max_subsystems", 00:05:28.169 "nvmf_stop_mdns_prr", 00:05:28.169 "nvmf_publish_mdns_prr", 00:05:28.169 "nvmf_subsystem_get_listeners", 00:05:28.169 "nvmf_subsystem_get_qpairs", 00:05:28.169 "nvmf_subsystem_get_controllers", 00:05:28.169 "nvmf_get_stats", 00:05:28.169 "nvmf_get_transports", 00:05:28.169 "nvmf_create_transport", 00:05:28.169 "nvmf_get_targets", 00:05:28.169 "nvmf_delete_target", 00:05:28.169 "nvmf_create_target", 00:05:28.169 "nvmf_subsystem_allow_any_host", 00:05:28.169 "nvmf_subsystem_set_keys", 00:05:28.169 "nvmf_subsystem_remove_host", 00:05:28.169 "nvmf_subsystem_add_host", 00:05:28.169 "nvmf_ns_remove_host", 00:05:28.169 "nvmf_ns_add_host", 00:05:28.169 "nvmf_subsystem_remove_ns", 00:05:28.169 "nvmf_subsystem_set_ns_ana_group", 00:05:28.169 "nvmf_subsystem_add_ns", 00:05:28.169 "nvmf_subsystem_listener_set_ana_state", 00:05:28.169 "nvmf_discovery_get_referrals", 00:05:28.169 "nvmf_discovery_remove_referral", 00:05:28.169 "nvmf_discovery_add_referral", 00:05:28.169 "nvmf_subsystem_remove_listener", 00:05:28.169 "nvmf_subsystem_add_listener", 00:05:28.169 "nvmf_delete_subsystem", 00:05:28.169 "nvmf_create_subsystem", 00:05:28.169 "nvmf_get_subsystems", 00:05:28.169 "env_dpdk_get_mem_stats", 00:05:28.169 "nbd_get_disks", 00:05:28.169 "nbd_stop_disk", 00:05:28.169 "nbd_start_disk", 00:05:28.169 "ublk_recover_disk", 00:05:28.169 "ublk_get_disks", 00:05:28.169 "ublk_stop_disk", 00:05:28.169 "ublk_start_disk", 00:05:28.169 "ublk_destroy_target", 00:05:28.169 "ublk_create_target", 00:05:28.169 "virtio_blk_create_transport", 00:05:28.169 "virtio_blk_get_transports", 00:05:28.169 "vhost_controller_set_coalescing", 00:05:28.169 "vhost_get_controllers", 00:05:28.169 "vhost_delete_controller", 00:05:28.169 "vhost_create_blk_controller", 00:05:28.169 "vhost_scsi_controller_remove_target", 00:05:28.169 "vhost_scsi_controller_add_target", 00:05:28.169 "vhost_start_scsi_controller", 00:05:28.169 "vhost_create_scsi_controller", 00:05:28.169 "thread_set_cpumask", 00:05:28.169 "scheduler_set_options", 00:05:28.169 "framework_get_governor", 00:05:28.169 "framework_get_scheduler", 00:05:28.169 "framework_set_scheduler", 00:05:28.169 "framework_get_reactors", 00:05:28.169 "thread_get_io_channels", 00:05:28.169 "thread_get_pollers", 00:05:28.169 "thread_get_stats", 00:05:28.169 "framework_monitor_context_switch", 00:05:28.170 "spdk_kill_instance", 00:05:28.170 "log_enable_timestamps", 00:05:28.170 "log_get_flags", 00:05:28.170 "log_clear_flag", 00:05:28.170 "log_set_flag", 00:05:28.170 "log_get_level", 00:05:28.170 "log_set_level", 00:05:28.170 "log_get_print_level", 00:05:28.170 "log_set_print_level", 00:05:28.170 "framework_enable_cpumask_locks", 00:05:28.170 "framework_disable_cpumask_locks", 00:05:28.170 "framework_wait_init", 00:05:28.170 "framework_start_init", 00:05:28.170 "scsi_get_devices", 00:05:28.170 "bdev_get_histogram", 00:05:28.170 "bdev_enable_histogram", 00:05:28.170 "bdev_set_qos_limit", 00:05:28.170 "bdev_set_qd_sampling_period", 00:05:28.170 "bdev_get_bdevs", 00:05:28.170 "bdev_reset_iostat", 00:05:28.170 "bdev_get_iostat", 00:05:28.170 "bdev_examine", 00:05:28.170 "bdev_wait_for_examine", 00:05:28.170 "bdev_set_options", 00:05:28.170 "accel_get_stats", 00:05:28.170 "accel_set_options", 00:05:28.170 "accel_set_driver", 00:05:28.170 "accel_crypto_key_destroy", 00:05:28.170 "accel_crypto_keys_get", 00:05:28.170 "accel_crypto_key_create", 00:05:28.170 "accel_assign_opc", 00:05:28.170 
"accel_get_module_info", 00:05:28.170 "accel_get_opc_assignments", 00:05:28.170 "vmd_rescan", 00:05:28.170 "vmd_remove_device", 00:05:28.170 "vmd_enable", 00:05:28.170 "sock_get_default_impl", 00:05:28.170 "sock_set_default_impl", 00:05:28.170 "sock_impl_set_options", 00:05:28.170 "sock_impl_get_options", 00:05:28.170 "iobuf_get_stats", 00:05:28.170 "iobuf_set_options", 00:05:28.170 "keyring_get_keys", 00:05:28.170 "vfu_tgt_set_base_path", 00:05:28.170 "framework_get_pci_devices", 00:05:28.170 "framework_get_config", 00:05:28.170 "framework_get_subsystems", 00:05:28.170 "fsdev_set_opts", 00:05:28.170 "fsdev_get_opts", 00:05:28.170 "trace_get_info", 00:05:28.170 "trace_get_tpoint_group_mask", 00:05:28.170 "trace_disable_tpoint_group", 00:05:28.170 "trace_enable_tpoint_group", 00:05:28.170 "trace_clear_tpoint_mask", 00:05:28.170 "trace_set_tpoint_mask", 00:05:28.170 "notify_get_notifications", 00:05:28.170 "notify_get_types", 00:05:28.170 "spdk_get_version", 00:05:28.170 "rpc_get_methods" 00:05:28.170 ] 00:05:28.170 08:05:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:28.170 08:05:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.170 08:05:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.170 08:05:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:28.170 08:05:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58769 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58769 ']' 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58769 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58769 00:05:28.170 killing process with pid 58769 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58769' 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58769 00:05:28.170 08:05:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58769 00:05:30.707 00:05:30.707 real 0m4.643s 00:05:30.707 user 0m8.029s 00:05:30.707 sys 0m0.878s 00:05:30.707 08:05:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.707 08:05:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.707 ************************************ 00:05:30.707 END TEST spdkcli_tcp 00:05:30.707 ************************************ 00:05:30.966 08:05:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.966 08:05:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.966 08:05:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.966 08:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:30.966 ************************************ 00:05:30.966 START TEST dpdk_mem_utility 00:05:30.966 ************************************ 00:05:30.966 08:05:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.966 * Looking for test storage... 
00:05:30.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:30.966 08:05:35 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.966 08:05:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.966 08:05:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.226 08:05:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.226 08:05:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.227 08:05:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.227 08:05:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.227 --rc genhtml_branch_coverage=1 00:05:31.227 --rc genhtml_function_coverage=1 00:05:31.227 --rc genhtml_legend=1 00:05:31.227 --rc geninfo_all_blocks=1 00:05:31.227 --rc geninfo_unexecuted_blocks=1 00:05:31.227 00:05:31.227 ' 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.227 --rc 
genhtml_branch_coverage=1 00:05:31.227 --rc genhtml_function_coverage=1 00:05:31.227 --rc genhtml_legend=1 00:05:31.227 --rc geninfo_all_blocks=1 00:05:31.227 --rc geninfo_unexecuted_blocks=1 00:05:31.227 00:05:31.227 ' 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.227 --rc genhtml_branch_coverage=1 00:05:31.227 --rc genhtml_function_coverage=1 00:05:31.227 --rc genhtml_legend=1 00:05:31.227 --rc geninfo_all_blocks=1 00:05:31.227 --rc geninfo_unexecuted_blocks=1 00:05:31.227 00:05:31.227 ' 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.227 --rc genhtml_branch_coverage=1 00:05:31.227 --rc genhtml_function_coverage=1 00:05:31.227 --rc genhtml_legend=1 00:05:31.227 --rc geninfo_all_blocks=1 00:05:31.227 --rc geninfo_unexecuted_blocks=1 00:05:31.227 00:05:31.227 ' 00:05:31.227 08:05:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:31.227 08:05:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58896 00:05:31.227 08:05:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.227 08:05:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58896 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58896 ']' 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.227 08:05:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.227 [2024-11-17 08:05:36.130343] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:31.227 [2024-11-17 08:05:36.130946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:05:31.486 [2024-11-17 08:05:36.319286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.486 [2024-11-17 08:05:36.455378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.746 [2024-11-17 08:05:36.741502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.686 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.686 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:32.686 08:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:32.686 08:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:32.686 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.686 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.686 { 00:05:32.686 "filename": "/tmp/spdk_mem_dump.txt" 00:05:32.686 } 00:05:32.686 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.686 08:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:32.686 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:32.686 1 heaps totaling size 816.000000 MiB 00:05:32.686 size: 816.000000 MiB heap id: 0 00:05:32.686 end heaps---------- 00:05:32.686 9 mempools totaling size 595.772034 MiB 00:05:32.686 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:32.686 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:32.686 size: 92.545471 MiB name: bdev_io_58896 00:05:32.686 size: 50.003479 MiB name: msgpool_58896 00:05:32.686 size: 36.509338 MiB name: fsdev_io_58896 00:05:32.686 size: 21.763794 MiB name: PDU_Pool 00:05:32.686 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:32.686 size: 4.133484 MiB name: evtpool_58896 00:05:32.686 size: 0.026123 MiB name: Session_Pool 00:05:32.686 end mempools------- 00:05:32.686 6 memzones totaling size 4.142822 MiB 00:05:32.686 size: 1.000366 MiB name: RG_ring_0_58896 00:05:32.686 size: 1.000366 MiB name: RG_ring_1_58896 00:05:32.686 size: 1.000366 MiB name: RG_ring_4_58896 00:05:32.686 size: 1.000366 MiB name: RG_ring_5_58896 00:05:32.686 size: 0.125366 MiB name: RG_ring_2_58896 00:05:32.686 size: 0.015991 MiB name: RG_ring_3_58896 00:05:32.686 end memzones------- 00:05:32.686 08:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.686 heap id: 0 total size: 816.000000 MiB number of busy elements: 306 number of free elements: 18 00:05:32.686 list of free elements. 
size: 16.793579 MiB 00:05:32.686 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:32.686 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:32.686 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:32.686 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:32.686 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:32.686 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:32.686 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:32.686 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:32.686 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:32.686 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:32.686 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:32.686 element at address: 0x20001ac00000 with size: 0.563904 MiB 00:05:32.687 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:32.687 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:32.687 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:32.687 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:32.687 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:32.687 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:32.687 list of standard malloc elements. size: 199.285522 MiB 00:05:32.687 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:32.687 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:32.687 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:32.687 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:32.687 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:32.687 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:32.687 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:32.687 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:32.687 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:32.687 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:32.687 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:32.687 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:32.687 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:32.687 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:32.687 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:32.687 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac926c0 with size: 0.000244 MiB 
00:05:32.688 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:32.688 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:32.688 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:32.688 element at 
address: 0x20002806b080 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e180 
with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:32.688 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:32.689 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:32.689 list of memzone associated elements. 
size: 599.920898 MiB 00:05:32.689 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:32.689 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.689 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:32.689 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.689 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:32.689 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58896_0 00:05:32.689 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:32.689 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58896_0 00:05:32.689 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:32.689 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58896_0 00:05:32.689 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:32.689 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.689 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:32.689 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.689 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:32.689 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58896_0 00:05:32.689 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:32.689 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58896 00:05:32.689 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:32.689 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58896 00:05:32.689 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:32.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.689 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:32.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.689 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:32.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.689 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:32.689 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.689 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:32.689 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58896 00:05:32.689 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:32.689 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58896 00:05:32.689 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:32.689 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58896 00:05:32.689 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:32.689 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58896 00:05:32.689 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:32.689 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58896 00:05:32.689 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:32.689 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58896 00:05:32.689 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:32.689 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.689 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:32.689 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.689 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:32.689 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.689 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:32.689 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58896 00:05:32.689 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:32.689 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58896 00:05:32.689 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:32.689 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.689 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:32.689 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.689 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:32.689 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58896 00:05:32.689 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:32.689 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.689 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:32.689 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58896 00:05:32.689 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:32.689 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58896 00:05:32.689 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:32.689 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58896 00:05:32.689 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:32.689 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.689 08:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.689 08:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58896 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58896 ']' 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58896 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58896 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.689 killing process with pid 58896 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58896' 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58896 00:05:32.689 08:05:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58896 00:05:35.228 00:05:35.228 real 0m4.430s 00:05:35.228 user 0m4.107s 00:05:35.228 sys 0m0.809s 00:05:35.228 08:05:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.228 ************************************ 00:05:35.228 END TEST dpdk_mem_utility 00:05:35.228 ************************************ 00:05:35.228 08:05:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.487 08:05:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.487 08:05:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.487 08:05:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.487 08:05:40 -- common/autotest_common.sh@10 -- # set +x 
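The dpdk_mem_utility pass above drives the same two tools by hand: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump as the heap/mempool/memzone summary shown in the log. A minimal sketch against a running target (SPDK_DIR is a placeholder for the checkout; the -m 0 form is the narrower per-heap view the test also exercises):

    # Dump DPDK memory stats from a live SPDK target, then summarize the dump file.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$SPDK_DIR/scripts/dpdk_mem_info.py"                   # heaps, mempools and memzones, as above
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0              # detailed element listing for heap 0, as above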
00:05:35.487 ************************************ 00:05:35.487 START TEST event 00:05:35.487 ************************************ 00:05:35.487 08:05:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.487 * Looking for test storage... 00:05:35.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.487 08:05:40 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.487 08:05:40 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.487 08:05:40 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.487 08:05:40 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.487 08:05:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.487 08:05:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.487 08:05:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.487 08:05:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.487 08:05:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.487 08:05:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.487 08:05:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.487 08:05:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.487 08:05:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.487 08:05:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.487 08:05:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.487 08:05:40 event -- scripts/common.sh@344 -- # case "$op" in 00:05:35.487 08:05:40 event -- scripts/common.sh@345 -- # : 1 00:05:35.487 08:05:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.487 08:05:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.487 08:05:40 event -- scripts/common.sh@365 -- # decimal 1 00:05:35.487 08:05:40 event -- scripts/common.sh@353 -- # local d=1 00:05:35.488 08:05:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.488 08:05:40 event -- scripts/common.sh@355 -- # echo 1 00:05:35.488 08:05:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.747 08:05:40 event -- scripts/common.sh@366 -- # decimal 2 00:05:35.747 08:05:40 event -- scripts/common.sh@353 -- # local d=2 00:05:35.747 08:05:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.747 08:05:40 event -- scripts/common.sh@355 -- # echo 2 00:05:35.747 08:05:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.747 08:05:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.747 08:05:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.747 08:05:40 event -- scripts/common.sh@368 -- # return 0 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.747 --rc genhtml_branch_coverage=1 00:05:35.747 --rc genhtml_function_coverage=1 00:05:35.747 --rc genhtml_legend=1 00:05:35.747 --rc geninfo_all_blocks=1 00:05:35.747 --rc geninfo_unexecuted_blocks=1 00:05:35.747 00:05:35.747 ' 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.747 --rc genhtml_branch_coverage=1 00:05:35.747 --rc genhtml_function_coverage=1 00:05:35.747 --rc genhtml_legend=1 00:05:35.747 --rc 
geninfo_all_blocks=1 00:05:35.747 --rc geninfo_unexecuted_blocks=1 00:05:35.747 00:05:35.747 ' 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.747 --rc genhtml_branch_coverage=1 00:05:35.747 --rc genhtml_function_coverage=1 00:05:35.747 --rc genhtml_legend=1 00:05:35.747 --rc geninfo_all_blocks=1 00:05:35.747 --rc geninfo_unexecuted_blocks=1 00:05:35.747 00:05:35.747 ' 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.747 --rc genhtml_branch_coverage=1 00:05:35.747 --rc genhtml_function_coverage=1 00:05:35.747 --rc genhtml_legend=1 00:05:35.747 --rc geninfo_all_blocks=1 00:05:35.747 --rc geninfo_unexecuted_blocks=1 00:05:35.747 00:05:35.747 ' 00:05:35.747 08:05:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:35.747 08:05:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.747 08:05:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:35.747 08:05:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.747 08:05:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.747 ************************************ 00:05:35.747 START TEST event_perf 00:05:35.747 ************************************ 00:05:35.747 08:05:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.747 Running I/O for 1 seconds...[2024-11-17 08:05:40.571113] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:35.747 [2024-11-17 08:05:40.571226] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59010 ] 00:05:36.062 [2024-11-17 08:05:40.758134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.062 [2024-11-17 08:05:40.906270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.062 [2024-11-17 08:05:40.906484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.062 [2024-11-17 08:05:40.906595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.062 Running I/O for 1 seconds...[2024-11-17 08:05:40.906628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.439 00:05:37.439 lcore 0: 202408 00:05:37.439 lcore 1: 202405 00:05:37.439 lcore 2: 202405 00:05:37.439 lcore 3: 202406 00:05:37.439 done. 
00:05:37.439 ************************************ 00:05:37.439 END TEST event_perf 00:05:37.439 00:05:37.439 real 0m1.638s 00:05:37.439 user 0m4.364s 00:05:37.439 sys 0m0.151s 00:05:37.439 08:05:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.439 08:05:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 ************************************ 00:05:37.439 08:05:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.439 08:05:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.439 08:05:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.439 08:05:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 ************************************ 00:05:37.439 START TEST event_reactor 00:05:37.439 ************************************ 00:05:37.439 08:05:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.439 [2024-11-17 08:05:42.286187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:37.439 [2024-11-17 08:05:42.286489] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:05:37.698 [2024-11-17 08:05:42.474698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.698 [2024-11-17 08:05:42.612752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.076 test_start 00:05:39.076 oneshot 00:05:39.076 tick 100 00:05:39.076 tick 100 00:05:39.076 tick 250 00:05:39.076 tick 100 00:05:39.076 tick 100 00:05:39.076 tick 100 00:05:39.076 tick 250 00:05:39.076 tick 500 00:05:39.076 tick 100 00:05:39.076 tick 100 00:05:39.076 tick 250 00:05:39.076 tick 100 00:05:39.076 tick 100 00:05:39.076 test_end 00:05:39.076 00:05:39.076 real 0m1.610s 00:05:39.076 user 0m1.368s 00:05:39.076 sys 0m0.134s 00:05:39.076 ************************************ 00:05:39.076 END TEST event_reactor 00:05:39.076 ************************************ 00:05:39.076 08:05:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.076 08:05:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.076 08:05:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.076 08:05:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.076 08:05:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.076 08:05:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.076 ************************************ 00:05:39.076 START TEST event_reactor_perf 00:05:39.076 ************************************ 00:05:39.076 08:05:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.076 [2024-11-17 08:05:43.969083] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:39.076 [2024-11-17 08:05:43.969207] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59086 ] 00:05:39.333 [2024-11-17 08:05:44.156081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.333 [2024-11-17 08:05:44.289918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.709 test_start 00:05:40.709 test_end 00:05:40.709 Performance: 395658 events per second 00:05:40.709 00:05:40.709 real 0m1.610s 00:05:40.709 user 0m1.372s 00:05:40.709 sys 0m0.130s 00:05:40.709 ************************************ 00:05:40.709 END TEST event_reactor_perf 00:05:40.709 ************************************ 00:05:40.709 08:05:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.709 08:05:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.709 08:05:45 event -- event/event.sh@49 -- # uname -s 00:05:40.709 08:05:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:40.709 08:05:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:40.709 08:05:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.709 08:05:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.709 08:05:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.709 ************************************ 00:05:40.709 START TEST event_scheduler 00:05:40.709 ************************************ 00:05:40.709 08:05:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:40.969 * Looking for test storage... 
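The three event micro-benchmarks above are standalone binaries that take a runtime in seconds via -t (and, for event_perf, a core mask via -m); the invocations below reproduce the runs from this log, with SPDK_DIR again standing in for the checkout:

    # Re-run the event framework micro-benchmarks exactly as the suite above does.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1     # per-lcore event counts across 4 reactors
    "$SPDK_DIR/test/event/reactor/reactor" -t 1                  # oneshot/tick trace on a single reactor
    "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1        # events per second on a single reactor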
00:05:40.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:40.969 08:05:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.969 08:05:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.969 08:05:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.969 08:05:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.969 08:05:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.970 08:05:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.970 --rc genhtml_branch_coverage=1 00:05:40.970 --rc genhtml_function_coverage=1 00:05:40.970 --rc genhtml_legend=1 00:05:40.970 --rc geninfo_all_blocks=1 00:05:40.970 --rc geninfo_unexecuted_blocks=1 00:05:40.970 00:05:40.970 ' 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.970 --rc genhtml_branch_coverage=1 00:05:40.970 --rc genhtml_function_coverage=1 00:05:40.970 --rc genhtml_legend=1 00:05:40.970 --rc geninfo_all_blocks=1 00:05:40.970 --rc geninfo_unexecuted_blocks=1 00:05:40.970 00:05:40.970 ' 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.970 --rc genhtml_branch_coverage=1 00:05:40.970 --rc genhtml_function_coverage=1 00:05:40.970 --rc genhtml_legend=1 00:05:40.970 --rc geninfo_all_blocks=1 00:05:40.970 --rc geninfo_unexecuted_blocks=1 00:05:40.970 00:05:40.970 ' 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.970 --rc genhtml_branch_coverage=1 00:05:40.970 --rc genhtml_function_coverage=1 00:05:40.970 --rc genhtml_legend=1 00:05:40.970 --rc geninfo_all_blocks=1 00:05:40.970 --rc geninfo_unexecuted_blocks=1 00:05:40.970 00:05:40.970 ' 00:05:40.970 08:05:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:40.970 08:05:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59162 00:05:40.970 08:05:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:40.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
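The scripts/common.sh trace just above is the component-wise version comparison that decides whether the installed lcov (1.15 here) predates 2.x before picking coverage flags. A condensed sketch of the same comparison, assuming versions are dot/dash/colon separated integers (the real cmp_versions also supports the >, >= and <= operators):

version_lt() {                                  # hypothetical helper: succeeds when $1 < $2
	local IFS='.-:'
	local -a v1 v2
	read -ra v1 <<< "$1"
	read -ra v2 <<< "$2"
	local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
	for (( i = 0; i < len; i++ )); do
		local a=${v1[i]:-0} b=${v2[i]:-0}
		(( a > b )) && return 1
		(( a < b )) && return 0
	done
	return 1                                    # equal versions are not strictly less-than
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
	LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi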
00:05:40.970 08:05:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.970 08:05:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59162 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59162 ']' 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.970 08:05:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.970 [2024-11-17 08:05:45.940627] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:40.970 [2024-11-17 08:05:45.940997] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59162 ] 00:05:41.229 [2024-11-17 08:05:46.128039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.488 [2024-11-17 08:05:46.265607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.488 [2024-11-17 08:05:46.265800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.488 [2024-11-17 08:05:46.266000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.488 [2024-11-17 08:05:46.266653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:42.055 08:05:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.055 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.055 POWER: Cannot set governor of lcore 0 to performance 00:05:42.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.055 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.055 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.055 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:42.055 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:42.055 POWER: Unable to set Power Management Environment for lcore 0 00:05:42.055 [2024-11-17 08:05:46.767181] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:42.055 [2024-11-17 08:05:46.767206] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 
00:05:42.055 [2024-11-17 08:05:46.767220] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.055 [2024-11-17 08:05:46.767255] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.055 [2024-11-17 08:05:46.767266] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.055 [2024-11-17 08:05:46.767280] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.055 08:05:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.055 08:05:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.056 [2024-11-17 08:05:47.007849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.315 [2024-11-17 08:05:47.145613] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:42.315 08:05:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.315 08:05:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.315 08:05:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 ************************************ 00:05:42.315 START TEST scheduler_create_thread 00:05:42.315 ************************************ 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 2 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 3 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 4 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 5 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 6 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 7 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 8 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 9 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 10 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 
0 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.315 08:05:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.217 08:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.217 08:05:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:44.217 08:05:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:44.217 08:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.217 08:05:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.782 08:05:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.782 00:05:44.782 real 0m2.617s 00:05:44.782 user 0m0.020s 00:05:44.782 sys 0m0.012s 00:05:44.782 ************************************ 00:05:44.782 END TEST scheduler_create_thread 00:05:44.782 ************************************ 00:05:44.782 08:05:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.782 08:05:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.041 08:05:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:45.041 08:05:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59162 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59162 ']' 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59162 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59162 00:05:45.041 killing process with pid 59162 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@964 -- 
# '[' reactor_2 = sudo ']' 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59162' 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59162 00:05:45.041 08:05:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59162 00:05:45.300 [2024-11-17 08:05:50.258890] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:46.678 00:05:46.678 real 0m5.901s 00:05:46.678 user 0m9.741s 00:05:46.678 sys 0m0.658s 00:05:46.678 08:05:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.678 08:05:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.678 ************************************ 00:05:46.678 END TEST event_scheduler 00:05:46.678 ************************************ 00:05:46.678 08:05:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:46.678 08:05:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:46.678 08:05:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.678 08:05:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.678 08:05:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.678 ************************************ 00:05:46.678 START TEST app_repeat 00:05:46.678 ************************************ 00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59268 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59268' 00:05:46.678 Process app_repeat pid: 59268 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:46.678 spdk_app_start Round 0 00:05:46.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.678 08:05:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59268 /var/tmp/spdk-nbd.sock 00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59268 ']' 00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
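The scheduler shutdown traced above follows the usual killprocess sequence: probe the pid with kill -0, resolve the process name so a sudo wrapper is never signalled blindly, then kill and wait. A trimmed sketch under the assumption of Linux ps semantics (the branch the trace takes); the real helper also covers FreeBSD and handles sudo-owned processes differently:

killprocess() {
	local pid=$1
	[ -n "$pid" ] || return 1
	kill -0 "$pid" 2>/dev/null || return 0              # already exited, nothing to clean up
	local process_name
	process_name=$(ps --no-headers -o comm= "$pid")
	if [ "$process_name" != sudo ]; then                # the real helper treats sudo pids specially
		echo "killing process with pid $pid"
		kill "$pid"
		wait "$pid" 2>/dev/null || true                 # reap it when it is a child of this shell
	fi
}

killprocess "$scheduler_pid"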
00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.678 08:05:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.678 [2024-11-17 08:05:51.664168] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:46.678 [2024-11-17 08:05:51.664297] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59268 ] 00:05:46.937 [2024-11-17 08:05:51.853215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.196 [2024-11-17 08:05:51.990833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.196 [2024-11-17 08:05:51.990867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.455 [2024-11-17 08:05:52.234997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.714 08:05:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.714 08:05:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.714 08:05:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.973 Malloc0 00:05:47.973 08:05:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.232 Malloc1 00:05:48.232 08:05:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.232 08:05:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.492 /dev/nbd0 00:05:48.492 08:05:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.492 08:05:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.492 1+0 records in 00:05:48.492 1+0 records out 00:05:48.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359123 s, 11.4 MB/s 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.492 08:05:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.492 08:05:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.492 08:05:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.492 08:05:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.750 /dev/nbd1 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.751 1+0 records in 00:05:48.751 1+0 records out 00:05:48.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386316 s, 10.6 MB/s 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.751 08:05:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.751 08:05:53 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.751 08:05:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.012 { 00:05:49.012 "nbd_device": "/dev/nbd0", 00:05:49.012 "bdev_name": "Malloc0" 00:05:49.012 }, 00:05:49.012 { 00:05:49.012 "nbd_device": "/dev/nbd1", 00:05:49.012 "bdev_name": "Malloc1" 00:05:49.012 } 00:05:49.012 ]' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.012 { 00:05:49.012 "nbd_device": "/dev/nbd0", 00:05:49.012 "bdev_name": "Malloc0" 00:05:49.012 }, 00:05:49.012 { 00:05:49.012 "nbd_device": "/dev/nbd1", 00:05:49.012 "bdev_name": "Malloc1" 00:05:49.012 } 00:05:49.012 ]' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.012 /dev/nbd1' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.012 /dev/nbd1' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.012 256+0 records in 00:05:49.012 256+0 records out 00:05:49.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122786 s, 85.4 MB/s 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.012 08:05:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.013 256+0 records in 00:05:49.013 256+0 records out 00:05:49.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027862 s, 37.6 MB/s 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.013 256+0 records in 00:05:49.013 
256+0 records out 00:05:49.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279726 s, 37.5 MB/s 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.013 08:05:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.013 08:05:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.290 08:05:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.593 08:05:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.853 08:05:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.853 08:05:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.421 08:05:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.358 [2024-11-17 08:05:56.354798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.616 [2024-11-17 08:05:56.476544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.616 [2024-11-17 08:05:56.476560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.875 [2024-11-17 08:05:56.701204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.875 [2024-11-17 08:05:56.701324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.875 [2024-11-17 08:05:56.701347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.252 spdk_app_start Round 1 00:05:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.252 08:05:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.252 08:05:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.252 08:05:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59268 /var/tmp/spdk-nbd.sock 00:05:53.253 08:05:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59268 ']' 00:05:53.253 08:05:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.253 08:05:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.253 08:05:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
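Between app_repeat rounds the harness asks the target, over the /var/tmp/spdk-nbd.sock RPC socket, which NBD devices are still exported and counts them; the JSON listing and the jq/grep pipeline are visible in the trace above. Roughly, and with the rpc.py path used throughout this run:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # hypothetical condensation of nbd_get_count
nbd_sock=/var/tmp/spdk-nbd.sock

nbd_get_count() {
	local disks_json disks_name
	disks_json=$("$rpc_py" -s "$nbd_sock" nbd_get_disks)    # JSON list of {nbd_device, bdev_name}
	disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
	echo "$disks_name" | grep -c /dev/nbd || true           # prints 0 once everything is detached
}

[ "$(nbd_get_count)" -eq 0 ]                                # a leftover device fails the round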
00:05:53.253 08:05:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.253 08:05:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.511 08:05:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.511 08:05:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.511 08:05:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.770 Malloc0 00:05:53.770 08:05:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.029 Malloc1 00:05:54.029 08:05:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.029 08:05:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.288 /dev/nbd0 00:05:54.288 08:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.288 08:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.288 1+0 records in 00:05:54.288 1+0 records out 
00:05:54.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065349 s, 6.3 MB/s 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.288 08:05:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.288 08:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.288 08:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.288 08:05:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.547 /dev/nbd1 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.547 1+0 records in 00:05:54.547 1+0 records out 00:05:54.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434744 s, 9.4 MB/s 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.547 08:05:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.547 08:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.806 { 00:05:54.806 "nbd_device": "/dev/nbd0", 00:05:54.806 "bdev_name": "Malloc0" 00:05:54.806 }, 00:05:54.806 { 00:05:54.806 "nbd_device": "/dev/nbd1", 00:05:54.806 "bdev_name": "Malloc1" 00:05:54.806 } 00:05:54.806 
]' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.806 { 00:05:54.806 "nbd_device": "/dev/nbd0", 00:05:54.806 "bdev_name": "Malloc0" 00:05:54.806 }, 00:05:54.806 { 00:05:54.806 "nbd_device": "/dev/nbd1", 00:05:54.806 "bdev_name": "Malloc1" 00:05:54.806 } 00:05:54.806 ]' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.806 /dev/nbd1' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.806 /dev/nbd1' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.806 256+0 records in 00:05:54.806 256+0 records out 00:05:54.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118333 s, 88.6 MB/s 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.806 08:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.806 256+0 records in 00:05:54.807 256+0 records out 00:05:54.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274263 s, 38.2 MB/s 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.807 256+0 records in 00:05:54.807 256+0 records out 00:05:54.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349588 s, 30.0 MB/s 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:54.807 08:05:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.066 08:05:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.066 08:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.324 08:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.325 08:06:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.325 08:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.584 08:06:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.584 08:06:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.152 08:06:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.529 [2024-11-17 08:06:02.242445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.529 [2024-11-17 08:06:02.372063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.529 [2024-11-17 08:06:02.372098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.787 [2024-11-17 08:06:02.600544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.787 [2024-11-17 08:06:02.600679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.787 [2024-11-17 08:06:02.600696] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.166 08:06:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.166 spdk_app_start Round 2 00:05:59.166 08:06:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.166 08:06:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59268 /var/tmp/spdk-nbd.sock 00:05:59.166 08:06:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59268 ']' 00:05:59.166 08:06:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.166 08:06:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.166 08:06:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
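Every round drives the exported devices with the same write-then-verify pattern seen in the dd and cmp lines above: 1 MiB of random data is generated once, written through each /dev/nbdX with O_DIRECT, then compared back against the source file. A compact sketch using the paths from this run (error handling omitted):

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 256 x 4 KiB = 1 MiB of reference data

for nbd in "${nbd_list[@]}"; do
	dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write through the NBD device
done

for nbd in "${nbd_list[@]}"; do
	cmp -b -n 1M "$tmp" "$nbd"                               # byte-for-byte readback verification
done

rm "$tmp"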
00:05:59.166 08:06:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.166 08:06:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.425 08:06:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.425 08:06:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.425 08:06:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.685 Malloc0 00:05:59.685 08:06:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.945 Malloc1 00:05:59.945 08:06:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.945 08:06:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.204 /dev/nbd0 00:06:00.204 08:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.204 08:06:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.204 08:06:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.204 08:06:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.204 08:06:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.204 08:06:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.204 08:06:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.204 08:06:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.205 08:06:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.205 08:06:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.205 08:06:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.205 1+0 records in 00:06:00.205 1+0 records out 
00:06:00.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275229 s, 14.9 MB/s 00:06:00.205 08:06:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.205 08:06:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.205 08:06:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.205 08:06:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.205 08:06:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.205 08:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.205 08:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.205 08:06:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.464 /dev/nbd1 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.464 1+0 records in 00:06:00.464 1+0 records out 00:06:00.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367812 s, 11.1 MB/s 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.464 08:06:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.464 08:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.724 { 00:06:00.724 "nbd_device": "/dev/nbd0", 00:06:00.724 "bdev_name": "Malloc0" 00:06:00.724 }, 00:06:00.724 { 00:06:00.724 "nbd_device": "/dev/nbd1", 00:06:00.724 "bdev_name": "Malloc1" 00:06:00.724 } 
00:06:00.724 ]' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.724 { 00:06:00.724 "nbd_device": "/dev/nbd0", 00:06:00.724 "bdev_name": "Malloc0" 00:06:00.724 }, 00:06:00.724 { 00:06:00.724 "nbd_device": "/dev/nbd1", 00:06:00.724 "bdev_name": "Malloc1" 00:06:00.724 } 00:06:00.724 ]' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.724 /dev/nbd1' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.724 /dev/nbd1' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.724 256+0 records in 00:06:00.724 256+0 records out 00:06:00.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113701 s, 92.2 MB/s 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.724 256+0 records in 00:06:00.724 256+0 records out 00:06:00.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298065 s, 35.2 MB/s 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.724 256+0 records in 00:06:00.724 256+0 records out 00:06:00.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363519 s, 28.8 MB/s 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.724 08:06:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.724 08:06:05 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.725 08:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.984 08:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.984 08:06:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.985 08:06:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.244 08:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.503 08:06:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.503 08:06:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.763 08:06:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.142 [2024-11-17 08:06:07.983339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.142 [2024-11-17 08:06:08.104748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.142 [2024-11-17 08:06:08.104775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.402 [2024-11-17 08:06:08.332663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.402 [2024-11-17 08:06:08.332759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.402 [2024-11-17 08:06:08.332778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.781 08:06:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59268 /var/tmp/spdk-nbd.sock 00:06:04.781 08:06:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59268 ']' 00:06:04.781 08:06:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.781 08:06:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.781 08:06:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
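
Rounds 1 and 2 above exercise the same data-integrity cycle (nbd_common.sh @70-@85 in the trace): write a 1 MiB random pattern to a temp file, dd it onto every exported /dev/nbdX with O_DIRECT, read each device back with cmp, then remove the pattern file. A condensed sketch of that write/verify pair; the function and loop shape follow the trace, while $testdir and the error handling are assumptions:

nbd_dd_data_verify() {
    local nbd_list=($1)
    local operation=$2
    local i
    local tmp_file=$testdir/nbdrandtest    # $testdir assumed; the log shows spdk/test/event
    if [[ $operation == write ]]; then
        # lay down a 1 MiB random pattern, then copy it onto every exported device with O_DIRECT
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [[ $operation == verify ]]; then
        # read every device back and compare it byte-for-byte against the pattern file
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}
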
00:06:04.781 08:06:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.781 08:06:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.040 08:06:09 event.app_repeat -- event/event.sh@39 -- # killprocess 59268 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59268 ']' 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59268 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.040 08:06:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59268 00:06:05.040 08:06:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.040 08:06:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.040 killing process with pid 59268 00:06:05.040 08:06:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59268' 00:06:05.040 08:06:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59268 00:06:05.040 08:06:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59268 00:06:06.420 spdk_app_start is called in Round 0. 00:06:06.420 Shutdown signal received, stop current app iteration 00:06:06.420 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:06.420 spdk_app_start is called in Round 1. 00:06:06.420 Shutdown signal received, stop current app iteration 00:06:06.420 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:06.420 spdk_app_start is called in Round 2. 00:06:06.420 Shutdown signal received, stop current app iteration 00:06:06.420 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:06.420 spdk_app_start is called in Round 3. 00:06:06.420 Shutdown signal received, stop current app iteration 00:06:06.420 08:06:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.420 08:06:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.420 00:06:06.420 real 0m19.554s 00:06:06.420 user 0m40.823s 00:06:06.420 sys 0m3.605s 00:06:06.420 08:06:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.420 08:06:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.420 ************************************ 00:06:06.420 END TEST app_repeat 00:06:06.420 ************************************ 00:06:06.420 08:06:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.420 08:06:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.420 08:06:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.420 08:06:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.420 08:06:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.420 ************************************ 00:06:06.420 START TEST cpu_locks 00:06:06.420 ************************************ 00:06:06.420 08:06:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.420 * Looking for test storage... 
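
The "spdk_app_start Round 0..3" summary above comes from the app_repeat driver loop in event.sh (@23-@42 in the trace): the app under test is killed and restarted over three rounds, and each round must recreate the Malloc bdevs and pass the NBD verify cycle before SIGTERM is sent again. A sketch of that loop under stated assumptions; the rpc_py helper and $pid capture are not shown verbatim in the log:

rpc_py() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }   # assumed helper

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$pid" /var/tmp/spdk-nbd.sock      # $pid is the app_repeat process under test
    rpc_py bdev_malloc_create 64 4096                # Malloc0
    rpc_py bdev_malloc_create 64 4096                # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc_py spdk_kill_instance SIGTERM                # app_repeat catches this and restarts itself
    sleep 3
done
waitforlisten "$pid" /var/tmp/spdk-nbd.sock          # the final Round 3 instance comes back up
killprocess "$pid"
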
00:06:06.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.420 08:06:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.420 08:06:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.420 08:06:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.679 08:06:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.679 08:06:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.679 08:06:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.679 08:06:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 08:06:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 
00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 08:06:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 08:06:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.680 --rc genhtml_branch_coverage=1 00:06:06.680 --rc genhtml_function_coverage=1 00:06:06.680 --rc genhtml_legend=1 00:06:06.680 --rc geninfo_all_blocks=1 00:06:06.680 --rc geninfo_unexecuted_blocks=1 00:06:06.680 00:06:06.680 ' 00:06:06.680 08:06:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.680 08:06:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.680 08:06:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.680 08:06:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.680 08:06:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.680 08:06:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.680 08:06:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.680 ************************************ 00:06:06.680 START TEST default_locks 00:06:06.680 ************************************ 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59725 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59725 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59725 ']' 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.680 08:06:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.680 [2024-11-17 08:06:11.602250] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:06.680 [2024-11-17 08:06:11.602397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:06:06.939 [2024-11-17 08:06:11.789391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.939 [2024-11-17 08:06:11.931292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.507 [2024-11-17 08:06:12.229282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.075 08:06:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.076 08:06:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:08.076 08:06:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59725 00:06:08.076 08:06:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.076 08:06:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59725 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59725 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59725 ']' 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59725 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59725 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.644 killing process with pid 59725 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59725' 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59725 00:06:08.644 08:06:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59725 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59725 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59725 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59725 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59725 ']' 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.183 
08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.183 ERROR: process (pid: 59725) is no longer running 00:06:11.183 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59725) - No such process 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.183 00:06:11.183 real 0m4.549s 00:06:11.183 user 0m4.307s 00:06:11.183 sys 0m0.909s 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.183 ************************************ 00:06:11.183 END TEST default_locks 00:06:11.183 ************************************ 00:06:11.183 08:06:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.183 08:06:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.183 08:06:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.183 08:06:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.183 08:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.183 ************************************ 00:06:11.183 START TEST default_locks_via_rpc 00:06:11.183 ************************************ 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59806 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59806 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59806 ']' 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.183 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:11.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.184 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.184 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.184 08:06:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.443 [2024-11-17 08:06:16.222077] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:11.443 [2024-11-17 08:06:16.222310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:06:11.443 [2024-11-17 08:06:16.393656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.702 [2024-11-17 08:06:16.526887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.962 [2024-11-17 08:06:16.818218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.531 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.532 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.532 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.532 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.532 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59806 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59806 00:06:12.791 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.050 08:06:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59806 00:06:13.050 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59806 ']' 00:06:13.051 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59806 00:06:13.051 08:06:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.051 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.051 08:06:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59806 00:06:13.051 08:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.051 killing process with pid 59806 00:06:13.051 08:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.051 08:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59806' 00:06:13.051 08:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59806 00:06:13.051 08:06:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59806 00:06:15.630 00:06:15.630 real 0m4.495s 00:06:15.630 user 0m4.224s 00:06:15.630 sys 0m0.899s 00:06:15.630 08:06:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.630 08:06:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.630 ************************************ 00:06:15.630 END TEST default_locks_via_rpc 00:06:15.630 ************************************ 00:06:15.889 08:06:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.889 08:06:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.889 08:06:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.889 08:06:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.889 ************************************ 00:06:15.889 START TEST non_locking_app_on_locked_coremask 00:06:15.889 ************************************ 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59889 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59889 /var/tmp/spdk.sock 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59889 ']' 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
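
Each test above tears its target down through killprocess (autotest_common.sh @954-@978 in the trace): confirm the pid is alive, check via ps that it is a normal SPDK reactor rather than a sudo wrapper, then kill and reap it. A sketch built from the traced commands; the sudo branch is elided because the trace never takes it:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                       # trace shows the '[' -z <pid> ']' guard
    kill -0 "$pid"                                  # fail early if the process already exited
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for a healthy SPDK target
    fi
    # a sudo-wrapped target would need 'sudo kill' here; that branch is never hit in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap the child so its exit status is checked
}
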
00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.889 08:06:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.889 [2024-11-17 08:06:20.794312] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:15.889 [2024-11-17 08:06:20.794453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:06:16.148 [2024-11-17 08:06:20.977661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.148 [2024-11-17 08:06:21.110631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.408 [2024-11-17 08:06:21.407525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59905 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59905 /var/tmp/spdk2.sock 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59905 ']' 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.345 08:06:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.345 [2024-11-17 08:06:22.205208] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:17.345 [2024-11-17 08:06:22.205349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59905 ] 00:06:17.604 [2024-11-17 08:06:22.392572] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
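
The central assertion in every cpu_locks test is locks_exist (cpu_locks.sh @22, traced above for pids 59725 and 59806 and below for the later targets): the file lock a target takes on its claimed core must show up in lslocks output for that pid. Reconstructed directly from the two traced commands:

locks_exist() {
    # the target takes a core-claim file lock; lslocks for that pid must list an
    # spdk_cpu_lock entry, otherwise the test fails
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
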
00:06:17.604 [2024-11-17 08:06:22.392634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.864 [2024-11-17 08:06:22.668822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.435 [2024-11-17 08:06:23.266816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.816 08:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.816 08:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.816 08:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59889 00:06:19.816 08:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59889 00:06:19.816 08:06:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59889 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59889 ']' 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59889 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59889 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.754 killing process with pid 59889 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59889' 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59889 00:06:20.754 08:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59889 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59905 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59905 ']' 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59905 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59905 00:06:26.031 killing process with pid 59905 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.031 08:06:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59905' 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59905 00:06:26.031 08:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59905 00:06:28.569 ************************************ 00:06:28.569 END TEST non_locking_app_on_locked_coremask 00:06:28.569 ************************************ 00:06:28.569 00:06:28.569 real 0m12.679s 00:06:28.569 user 0m12.619s 00:06:28.569 sys 0m1.794s 00:06:28.569 08:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.569 08:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.569 08:06:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.569 08:06:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.569 08:06:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.569 08:06:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.569 ************************************ 00:06:28.569 START TEST locking_app_on_unlocked_coremask 00:06:28.569 ************************************ 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60064 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60064 /var/tmp/spdk.sock 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60064 ']' 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.569 08:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.569 [2024-11-17 08:06:33.555403] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:28.569 [2024-11-17 08:06:33.555769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60064 ] 00:06:28.828 [2024-11-17 08:06:33.744191] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
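
The sequence that starts here (cpu_locks.sh @97-@108 in the trace below) launches the first target with --disable-cpumask-locks, so a second target on the same core but a different RPC socket is free to claim the core-0 lock; the test then checks that the lock belongs to the second pid and kills both. A sketch of that skeleton; the backgrounding and pid capture are assumptions, while the flags and socket paths are taken from the log:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # first target: same core, but no core lock taken
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # second target: free to claim the core-0 lock
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

locks_exist "$spdk_tgt_pid2"                    # the lock must belong to the second instance
killprocess "$spdk_tgt_pid"
killprocess "$spdk_tgt_pid2"
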
00:06:28.828 [2024-11-17 08:06:33.744252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.088 [2024-11-17 08:06:33.886717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.348 [2024-11-17 08:06:34.198152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60080 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60080 /var/tmp/spdk2.sock 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60080 ']' 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.916 08:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.175 [2024-11-17 08:06:35.016896] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:30.175 [2024-11-17 08:06:35.017285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60080 ] 00:06:30.435 [2024-11-17 08:06:35.207782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.693 [2024-11-17 08:06:35.504982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.262 [2024-11-17 08:06:36.116937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.639 08:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.639 08:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.639 08:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60080 00:06:32.639 08:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60080 00:06:32.639 08:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60064 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60064 ']' 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60064 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60064 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.577 killing process with pid 60064 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60064' 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60064 00:06:33.577 08:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60064 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60080 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60080 ']' 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60080 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60080 00:06:38.863 killing process with pid 60080 00:06:38.863 08:06:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60080' 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60080 00:06:38.863 08:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60080 00:06:41.436 ************************************ 00:06:41.436 END TEST locking_app_on_unlocked_coremask 00:06:41.436 ************************************ 00:06:41.436 00:06:41.436 real 0m12.779s 00:06:41.436 user 0m12.646s 00:06:41.436 sys 0m1.823s 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.436 08:06:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.436 08:06:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.436 08:06:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.436 08:06:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.436 ************************************ 00:06:41.436 START TEST locking_app_on_locked_coremask 00:06:41.436 ************************************ 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60239 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60239 /var/tmp/spdk.sock 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60239 ']' 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.436 08:06:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.436 [2024-11-17 08:06:46.417686] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:41.436 [2024-11-17 08:06:46.418025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60239 ] 00:06:41.696 [2024-11-17 08:06:46.605787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.955 [2024-11-17 08:06:46.750096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.215 [2024-11-17 08:06:47.074456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60261 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60261 /var/tmp/spdk2.sock 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60261 /var/tmp/spdk2.sock 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60261 /var/tmp/spdk2.sock 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60261 ']' 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.155 08:06:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.155 [2024-11-17 08:06:47.931732] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
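A quick aside on what this step exercises: the second spdk_tgt launched above reuses the same core mask (-m 0x1) as the already-running target pid 60239, so the test wraps it in NOT/waitforlisten and expects its startup to fail with the core-0 claim error that follows. A minimal standalone sketch of that expectation, assuming only the binary path taken from this run (timings and echo text are illustrative, not part of the test):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &          # first target claims core 0 (per the /var/tmp/spdk_cpu_lock_* checks later in this log)
first=$!
sleep 2                                                            # give it time to take the core lock
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second target acquired core 0"
else
    echo "expected: core 0 already locked by pid $first"           # mirrors the claim_cpu_cores error below
fi
kill "$first"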
00:06:43.155 [2024-11-17 08:06:47.931873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60261 ] 00:06:43.155 [2024-11-17 08:06:48.117860] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60239 has claimed it. 00:06:43.155 [2024-11-17 08:06:48.117936] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.724 ERROR: process (pid: 60261) is no longer running 00:06:43.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60261) - No such process 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60239 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60239 00:06:43.724 08:06:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60239 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60239 ']' 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60239 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60239 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.293 killing process with pid 60239 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60239' 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60239 00:06:44.293 08:06:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60239 00:06:47.588 00:06:47.588 real 0m5.585s 00:06:47.588 user 0m5.522s 00:06:47.588 sys 0m1.151s 00:06:47.588 08:06:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.588 08:06:51 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:47.588 ************************************ 00:06:47.588 END TEST locking_app_on_locked_coremask 00:06:47.588 ************************************ 00:06:47.588 08:06:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.588 08:06:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.588 08:06:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.588 08:06:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.588 ************************************ 00:06:47.588 START TEST locking_overlapped_coremask 00:06:47.588 ************************************ 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60336 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60336 /var/tmp/spdk.sock 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60336 ']' 00:06:47.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.588 08:06:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.588 [2024-11-17 08:06:52.071864] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:47.589 [2024-11-17 08:06:52.072254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60336 ] 00:06:47.589 [2024-11-17 08:06:52.258869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.589 [2024-11-17 08:06:52.408030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.589 [2024-11-17 08:06:52.408156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.589 [2024-11-17 08:06:52.408195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.848 [2024-11-17 08:06:52.733786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60359 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60359 /var/tmp/spdk2.sock 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60359 /var/tmp/spdk2.sock 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60359 /var/tmp/spdk2.sock 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60359 ']' 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.789 08:06:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.789 [2024-11-17 08:06:53.615674] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
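For the overlapped-coremask case above: the first target holds cores 0-2 (-m 0x7) while the second asks for cores 2-4 (-m 0x1c), so the two masks intersect on core 2, which is exactly the core named in the claim error that follows. A quick check of that arithmetic (shell only, no SPDK involved):
printf 'mask1=0x7  -> cores 0,1,2\n'
printf 'mask2=0x1c -> cores 2,3,4\n'
printf 'overlap  = 0x%x (bit 2 -> core 2)\n' $(( 0x7 & 0x1c ))      # prints overlap = 0x4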
00:06:48.789 [2024-11-17 08:06:53.616064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:06:49.049 [2024-11-17 08:06:53.808428] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60336 has claimed it. 00:06:49.049 [2024-11-17 08:06:53.808535] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.309 ERROR: process (pid: 60359) is no longer running 00:06:49.309 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60359) - No such process 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.309 08:06:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60336 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60336 ']' 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60336 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60336 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60336' 00:06:49.310 killing process with pid 60336 00:06:49.310 08:06:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60336 00:06:49.310 08:06:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60336 00:06:52.606 ************************************ 00:06:52.606 END TEST locking_overlapped_coremask 00:06:52.606 ************************************ 00:06:52.606 00:06:52.606 real 0m5.053s 00:06:52.606 user 0m13.426s 00:06:52.606 sys 0m0.917s 00:06:52.606 08:06:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.606 08:06:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.606 08:06:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:52.606 08:06:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.606 08:06:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.606 08:06:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.606 ************************************ 00:06:52.606 START TEST locking_overlapped_coremask_via_rpc 00:06:52.606 ************************************ 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60429 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60429 /var/tmp/spdk.sock 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60429 ']' 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.606 08:06:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.606 [2024-11-17 08:06:57.196312] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:52.606 [2024-11-17 08:06:57.196705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60429 ] 00:06:52.606 [2024-11-17 08:06:57.385171] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.606 [2024-11-17 08:06:57.385437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.606 [2024-11-17 08:06:57.525906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.606 [2024-11-17 08:06:57.526038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.606 [2024-11-17 08:06:57.526071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.865 [2024-11-17 08:06:57.838247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60447 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60447 /var/tmp/spdk2.sock 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60447 ']' 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.801 08:06:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.801 [2024-11-17 08:06:58.722427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:53.801 [2024-11-17 08:06:58.723275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60447 ] 00:06:54.060 [2024-11-17 08:06:58.911338] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.060 [2024-11-17 08:06:58.911387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.318 [2024-11-17 08:06:59.222294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.318 [2024-11-17 08:06:59.222425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.318 [2024-11-17 08:06:59.222490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.885 [2024-11-17 08:06:59.853948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.294 [2024-11-17 08:07:01.260622] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60429 has claimed it. 
00:06:56.294 request: 00:06:56.294 { 00:06:56.294 "method": "framework_enable_cpumask_locks", 00:06:56.294 "req_id": 1 00:06:56.294 } 00:06:56.294 Got JSON-RPC error response 00:06:56.294 response: 00:06:56.294 { 00:06:56.294 "code": -32603, 00:06:56.294 "message": "Failed to claim CPU core: 2" 00:06:56.294 } 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60429 /var/tmp/spdk.sock 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60429 ']' 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.294 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60447 /var/tmp/spdk2.sock 00:06:56.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60447 ']' 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
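In this via_rpc variant both targets start with --disable-cpumask-locks ("CPU core locks deactivated" above), and the core locks are only taken later over JSON-RPC: framework_enable_cpumask_locks succeeds on the first target, while the same call against the second target's socket returns the -32603 "Failed to claim CPU core: 2" response shown above, since core 2 is shared between masks 0x7 and 0x1c. A hedged sketch of that sequence, using the rpc.py path and socket names from this run:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" framework_enable_cpumask_locks                               # first target: locks cores 0-2
"$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected JSON-RPC error -32603: Failed to claim CPU core: 2"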
00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.553 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.813 ************************************ 00:06:56.813 END TEST locking_overlapped_coremask_via_rpc 00:06:56.813 ************************************ 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.813 00:06:56.813 real 0m4.644s 00:06:56.813 user 0m1.252s 00:06:56.813 sys 0m0.264s 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.813 08:07:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.813 08:07:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:56.813 08:07:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60429 ]] 00:06:56.813 08:07:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60429 00:06:56.813 08:07:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60429 ']' 00:06:56.813 08:07:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60429 00:06:56.813 08:07:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:56.813 08:07:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.813 08:07:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60429 00:06:57.073 08:07:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.073 08:07:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.073 killing process with pid 60429 00:06:57.073 08:07:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60429' 00:06:57.073 08:07:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60429 00:06:57.073 08:07:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60429 00:06:59.610 08:07:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60447 ]] 00:06:59.610 08:07:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60447 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60447 ']' 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60447 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.610 
08:07:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60447 00:06:59.610 killing process with pid 60447 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60447' 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60447 00:06:59.610 08:07:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60447 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:02.901 Process with pid 60429 is not found 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60429 ]] 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60429 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60429 ']' 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60429 00:07:02.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60429) - No such process 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60429 is not found' 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60447 ]] 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60447 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60447 ']' 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60447 00:07:02.901 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60447) - No such process 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60447 is not found' 00:07:02.901 Process with pid 60447 is not found 00:07:02.901 08:07:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.901 ************************************ 00:07:02.901 END TEST cpu_locks 00:07:02.901 ************************************ 00:07:02.901 00:07:02.901 real 0m56.024s 00:07:02.901 user 1m32.290s 00:07:02.901 sys 0m9.457s 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.901 08:07:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.901 ************************************ 00:07:02.901 END TEST event 00:07:02.901 ************************************ 00:07:02.901 00:07:02.901 real 1m27.031s 00:07:02.901 user 2m30.204s 00:07:02.901 sys 0m14.580s 00:07:02.901 08:07:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.901 08:07:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.901 08:07:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:02.901 08:07:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.901 08:07:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.901 08:07:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.901 ************************************ 00:07:02.901 START TEST thread 00:07:02.901 ************************************ 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:02.901 * Looking for test storage... 
00:07:02.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.901 08:07:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.901 08:07:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.901 08:07:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.901 08:07:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.901 08:07:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.901 08:07:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.901 08:07:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.901 08:07:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.901 08:07:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.901 08:07:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.901 08:07:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.901 08:07:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:02.901 08:07:07 thread -- scripts/common.sh@345 -- # : 1 00:07:02.901 08:07:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.901 08:07:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.901 08:07:07 thread -- scripts/common.sh@365 -- # decimal 1 00:07:02.901 08:07:07 thread -- scripts/common.sh@353 -- # local d=1 00:07:02.901 08:07:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.901 08:07:07 thread -- scripts/common.sh@355 -- # echo 1 00:07:02.901 08:07:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.901 08:07:07 thread -- scripts/common.sh@366 -- # decimal 2 00:07:02.901 08:07:07 thread -- scripts/common.sh@353 -- # local d=2 00:07:02.901 08:07:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.901 08:07:07 thread -- scripts/common.sh@355 -- # echo 2 00:07:02.901 08:07:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.901 08:07:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.901 08:07:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.901 08:07:07 thread -- scripts/common.sh@368 -- # return 0 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.901 --rc genhtml_branch_coverage=1 00:07:02.901 --rc genhtml_function_coverage=1 00:07:02.901 --rc genhtml_legend=1 00:07:02.901 --rc geninfo_all_blocks=1 00:07:02.901 --rc geninfo_unexecuted_blocks=1 00:07:02.901 00:07:02.901 ' 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.901 --rc genhtml_branch_coverage=1 00:07:02.901 --rc genhtml_function_coverage=1 00:07:02.901 --rc genhtml_legend=1 00:07:02.901 --rc geninfo_all_blocks=1 00:07:02.901 --rc geninfo_unexecuted_blocks=1 00:07:02.901 00:07:02.901 ' 00:07:02.901 08:07:07 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:02.901 --rc genhtml_branch_coverage=1 00:07:02.901 --rc genhtml_function_coverage=1 00:07:02.901 --rc genhtml_legend=1 00:07:02.902 --rc geninfo_all_blocks=1 00:07:02.902 --rc geninfo_unexecuted_blocks=1 00:07:02.902 00:07:02.902 ' 00:07:02.902 08:07:07 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.902 --rc genhtml_branch_coverage=1 00:07:02.902 --rc genhtml_function_coverage=1 00:07:02.902 --rc genhtml_legend=1 00:07:02.902 --rc geninfo_all_blocks=1 00:07:02.902 --rc geninfo_unexecuted_blocks=1 00:07:02.902 00:07:02.902 ' 00:07:02.902 08:07:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.902 08:07:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:02.902 08:07:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.902 08:07:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.902 ************************************ 00:07:02.902 START TEST thread_poller_perf 00:07:02.902 ************************************ 00:07:02.902 08:07:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.902 [2024-11-17 08:07:07.685069] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:02.902 [2024-11-17 08:07:07.685393] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:07:02.902 [2024-11-17 08:07:07.869099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.161 [2024-11-17 08:07:08.014960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.161 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:04.540 [2024-11-17T08:07:09.552Z] ====================================== 00:07:04.540 [2024-11-17T08:07:09.552Z] busy:2499589068 (cyc) 00:07:04.540 [2024-11-17T08:07:09.552Z] total_run_count: 411000 00:07:04.540 [2024-11-17T08:07:09.552Z] tsc_hz: 2490000000 (cyc) 00:07:04.540 [2024-11-17T08:07:09.552Z] ====================================== 00:07:04.540 [2024-11-17T08:07:09.552Z] poller_cost: 6081 (cyc), 2442 (nsec) 00:07:04.540 00:07:04.540 real 0m1.636s 00:07:04.540 user 0m1.395s 00:07:04.540 sys 0m0.132s 00:07:04.540 08:07:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.540 08:07:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.540 ************************************ 00:07:04.540 END TEST thread_poller_perf 00:07:04.540 ************************************ 00:07:04.540 08:07:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.540 08:07:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:04.540 08:07:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.540 08:07:09 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.540 ************************************ 00:07:04.540 START TEST thread_poller_perf 00:07:04.540 ************************************ 00:07:04.540 08:07:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.540 [2024-11-17 08:07:09.402325] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:04.540 [2024-11-17 08:07:09.402491] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60695 ] 00:07:04.799 [2024-11-17 08:07:09.586613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.799 Running 1000 pollers for 1 seconds with 0 microseconds period. 
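The poller_cost reported for the first run above is derived from the run's own counters: busy cycles divided by total_run_count, then converted to nanoseconds with the reported tsc_hz. Reproducing the first run's numbers in the shell:
echo $(( 2499589068 / 411000 ))                      # 6081 cycles per poller invocation
awk 'BEGIN { printf "%.0f nsec\n", 6081 / 2.49 }'    # ~2442 nsec at tsc_hz = 2490000000 (2.49 GHz)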
00:07:04.799 [2024-11-17 08:07:09.721881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.179 [2024-11-17T08:07:11.191Z] ====================================== 00:07:06.179 [2024-11-17T08:07:11.191Z] busy:2494137912 (cyc) 00:07:06.179 [2024-11-17T08:07:11.191Z] total_run_count: 5400000 00:07:06.179 [2024-11-17T08:07:11.191Z] tsc_hz: 2490000000 (cyc) 00:07:06.179 [2024-11-17T08:07:11.191Z] ====================================== 00:07:06.179 [2024-11-17T08:07:11.191Z] poller_cost: 461 (cyc), 185 (nsec) 00:07:06.179 00:07:06.179 real 0m1.625s 00:07:06.179 user 0m1.377s 00:07:06.179 sys 0m0.140s 00:07:06.179 ************************************ 00:07:06.179 END TEST thread_poller_perf 00:07:06.179 ************************************ 00:07:06.179 08:07:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.180 08:07:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.180 08:07:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:06.180 ************************************ 00:07:06.180 END TEST thread 00:07:06.180 ************************************ 00:07:06.180 00:07:06.180 real 0m3.648s 00:07:06.180 user 0m2.936s 00:07:06.180 sys 0m0.505s 00:07:06.180 08:07:11 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.180 08:07:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.180 08:07:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:06.180 08:07:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:06.180 08:07:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.180 08:07:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.180 08:07:11 -- common/autotest_common.sh@10 -- # set +x 00:07:06.180 ************************************ 00:07:06.180 START TEST app_cmdline 00:07:06.180 ************************************ 00:07:06.180 08:07:11 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:06.439 * Looking for test storage... 
00:07:06.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:06.439 08:07:11 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.439 08:07:11 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.439 08:07:11 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.439 08:07:11 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.439 08:07:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.440 08:07:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.440 --rc genhtml_branch_coverage=1 00:07:06.440 --rc genhtml_function_coverage=1 00:07:06.440 --rc genhtml_legend=1 00:07:06.440 --rc geninfo_all_blocks=1 00:07:06.440 --rc geninfo_unexecuted_blocks=1 00:07:06.440 00:07:06.440 ' 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.440 --rc genhtml_branch_coverage=1 00:07:06.440 --rc genhtml_function_coverage=1 00:07:06.440 --rc genhtml_legend=1 00:07:06.440 --rc geninfo_all_blocks=1 00:07:06.440 --rc geninfo_unexecuted_blocks=1 00:07:06.440 
00:07:06.440 ' 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.440 --rc genhtml_branch_coverage=1 00:07:06.440 --rc genhtml_function_coverage=1 00:07:06.440 --rc genhtml_legend=1 00:07:06.440 --rc geninfo_all_blocks=1 00:07:06.440 --rc geninfo_unexecuted_blocks=1 00:07:06.440 00:07:06.440 ' 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.440 --rc genhtml_branch_coverage=1 00:07:06.440 --rc genhtml_function_coverage=1 00:07:06.440 --rc genhtml_legend=1 00:07:06.440 --rc geninfo_all_blocks=1 00:07:06.440 --rc geninfo_unexecuted_blocks=1 00:07:06.440 00:07:06.440 ' 00:07:06.440 08:07:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:06.440 08:07:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60779 00:07:06.440 08:07:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:06.440 08:07:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60779 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60779 ']' 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.440 08:07:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.699 [2024-11-17 08:07:11.468974] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
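Note on the cmdline test target started above: it runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over /var/tmp/spdk.sock; the spdk_get_version JSON and the -32601 "Method not found" for env_dpdk_get_mem_stats that follow are both consequences of that allow-list. A hedged outline of the same behaviour, with paths taken from this run:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" spdk_get_version            # allowed: returns the version JSON shown below
"$rpc" rpc_get_methods             # allowed: lists exactly the two permitted methods
"$rpc" env_dpdk_get_mem_stats      # rejected with -32601 "Method not found"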
00:07:06.699 [2024-11-17 08:07:11.469115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:07:06.699 [2024-11-17 08:07:11.653085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.959 [2024-11-17 08:07:11.799887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.218 [2024-11-17 08:07:12.112148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.156 08:07:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.156 08:07:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:08.156 08:07:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:08.157 { 00:07:08.157 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:08.157 "fields": { 00:07:08.157 "major": 25, 00:07:08.157 "minor": 1, 00:07:08.157 "patch": 0, 00:07:08.157 "suffix": "-pre", 00:07:08.157 "commit": "83e8405e4" 00:07:08.157 } 00:07:08.157 } 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:08.157 08:07:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:08.157 08:07:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.416 request: 00:07:08.416 { 00:07:08.416 "method": "env_dpdk_get_mem_stats", 00:07:08.416 "req_id": 1 00:07:08.416 } 00:07:08.416 Got JSON-RPC error response 00:07:08.416 response: 00:07:08.416 { 00:07:08.416 "code": -32601, 00:07:08.416 "message": "Method not found" 00:07:08.416 } 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.416 08:07:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60779 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60779 ']' 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60779 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60779 00:07:08.416 killing process with pid 60779 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60779' 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 60779 00:07:08.416 08:07:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 60779 00:07:10.953 00:07:10.953 real 0m4.823s 00:07:10.953 user 0m4.789s 00:07:10.953 sys 0m0.880s 00:07:10.953 08:07:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.953 ************************************ 00:07:10.953 END TEST app_cmdline 00:07:10.953 ************************************ 00:07:10.953 08:07:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.213 08:07:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:11.213 08:07:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.213 08:07:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.213 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:07:11.213 ************************************ 00:07:11.213 START TEST version 00:07:11.213 ************************************ 00:07:11.213 08:07:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:11.213 * Looking for test storage... 
00:07:11.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:11.213 08:07:16 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.213 08:07:16 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.213 08:07:16 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.213 08:07:16 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.213 08:07:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.213 08:07:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.213 08:07:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.213 08:07:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.213 08:07:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.213 08:07:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.213 08:07:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.213 08:07:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.213 08:07:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.213 08:07:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.213 08:07:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.213 08:07:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:11.213 08:07:16 version -- scripts/common.sh@345 -- # : 1 00:07:11.213 08:07:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.213 08:07:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.213 08:07:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:11.213 08:07:16 version -- scripts/common.sh@353 -- # local d=1 00:07:11.213 08:07:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.213 08:07:16 version -- scripts/common.sh@355 -- # echo 1 00:07:11.213 08:07:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.213 08:07:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:11.213 08:07:16 version -- scripts/common.sh@353 -- # local d=2 00:07:11.213 08:07:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.213 08:07:16 version -- scripts/common.sh@355 -- # echo 2 00:07:11.213 08:07:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.213 08:07:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.213 08:07:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.213 08:07:16 version -- scripts/common.sh@368 -- # return 0 00:07:11.213 08:07:16 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.472 08:07:16 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.472 --rc genhtml_branch_coverage=1 00:07:11.472 --rc genhtml_function_coverage=1 00:07:11.472 --rc genhtml_legend=1 00:07:11.472 --rc geninfo_all_blocks=1 00:07:11.472 --rc geninfo_unexecuted_blocks=1 00:07:11.472 00:07:11.472 ' 00:07:11.472 08:07:16 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.472 --rc genhtml_branch_coverage=1 00:07:11.472 --rc genhtml_function_coverage=1 00:07:11.472 --rc genhtml_legend=1 00:07:11.472 --rc geninfo_all_blocks=1 00:07:11.472 --rc geninfo_unexecuted_blocks=1 00:07:11.472 00:07:11.472 ' 00:07:11.472 08:07:16 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.472 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:11.472 --rc genhtml_branch_coverage=1 00:07:11.472 --rc genhtml_function_coverage=1 00:07:11.472 --rc genhtml_legend=1 00:07:11.472 --rc geninfo_all_blocks=1 00:07:11.472 --rc geninfo_unexecuted_blocks=1 00:07:11.472 00:07:11.472 ' 00:07:11.472 08:07:16 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.472 --rc genhtml_branch_coverage=1 00:07:11.472 --rc genhtml_function_coverage=1 00:07:11.472 --rc genhtml_legend=1 00:07:11.472 --rc geninfo_all_blocks=1 00:07:11.472 --rc geninfo_unexecuted_blocks=1 00:07:11.472 00:07:11.472 ' 00:07:11.472 08:07:16 version -- app/version.sh@17 -- # get_header_version major 00:07:11.472 08:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.472 08:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.472 08:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:11.472 08:07:16 version -- app/version.sh@17 -- # major=25 00:07:11.472 08:07:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:11.472 08:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.472 08:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:11.472 08:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.472 08:07:16 version -- app/version.sh@18 -- # minor=1 00:07:11.472 08:07:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:11.472 08:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.472 08:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:11.472 08:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.473 08:07:16 version -- app/version.sh@19 -- # patch=0 00:07:11.473 08:07:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:11.473 08:07:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:11.473 08:07:16 version -- app/version.sh@14 -- # cut -f2 00:07:11.473 08:07:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.473 08:07:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:11.473 08:07:16 version -- app/version.sh@22 -- # version=25.1 00:07:11.473 08:07:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:11.473 08:07:16 version -- app/version.sh@28 -- # version=25.1rc0 00:07:11.473 08:07:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:11.473 08:07:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:11.473 08:07:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:11.473 08:07:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:11.473 ************************************ 00:07:11.473 END TEST version 00:07:11.473 ************************************ 00:07:11.473 00:07:11.473 real 0m0.324s 00:07:11.473 user 0m0.189s 00:07:11.473 sys 0m0.191s 00:07:11.473 08:07:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.473 08:07:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:11.473 08:07:16 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:11.473 08:07:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:11.473 08:07:16 -- spdk/autotest.sh@194 -- # uname -s 00:07:11.473 08:07:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:11.473 08:07:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:11.473 08:07:16 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:11.473 08:07:16 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:11.473 08:07:16 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:11.473 08:07:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.473 08:07:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.473 08:07:16 -- common/autotest_common.sh@10 -- # set +x 00:07:11.473 ************************************ 00:07:11.473 START TEST spdk_dd 00:07:11.473 ************************************ 00:07:11.473 08:07:16 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:11.733 * Looking for test storage... 00:07:11.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.733 --rc genhtml_branch_coverage=1 00:07:11.733 --rc genhtml_function_coverage=1 00:07:11.733 --rc genhtml_legend=1 00:07:11.733 --rc geninfo_all_blocks=1 00:07:11.733 --rc geninfo_unexecuted_blocks=1 00:07:11.733 00:07:11.733 ' 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.733 --rc genhtml_branch_coverage=1 00:07:11.733 --rc genhtml_function_coverage=1 00:07:11.733 --rc genhtml_legend=1 00:07:11.733 --rc geninfo_all_blocks=1 00:07:11.733 --rc geninfo_unexecuted_blocks=1 00:07:11.733 00:07:11.733 ' 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.733 --rc genhtml_branch_coverage=1 00:07:11.733 --rc genhtml_function_coverage=1 00:07:11.733 --rc genhtml_legend=1 00:07:11.733 --rc geninfo_all_blocks=1 00:07:11.733 --rc geninfo_unexecuted_blocks=1 00:07:11.733 00:07:11.733 ' 00:07:11.733 08:07:16 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.733 --rc genhtml_branch_coverage=1 00:07:11.733 --rc genhtml_function_coverage=1 00:07:11.733 --rc genhtml_legend=1 00:07:11.733 --rc geninfo_all_blocks=1 00:07:11.733 --rc geninfo_unexecuted_blocks=1 00:07:11.733 00:07:11.733 ' 00:07:11.733 08:07:16 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.733 08:07:16 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.733 08:07:16 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.733 08:07:16 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.733 08:07:16 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.733 08:07:16 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:11.733 08:07:16 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.733 08:07:16 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.323 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.323 08:07:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:12.323 08:07:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:12.323 08:07:17 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:12.323 08:07:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:12.324 08:07:17 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:12.324 08:07:17 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:12.324 08:07:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:12.324 08:07:17 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:12.324 08:07:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:12.324 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.324 08:07:17 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:12.324 08:07:17 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:12.585 
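The nvme_in_userspace helper that just resolved 0000:00:10.0 and 0000:00:11.0 filters PCI devices by class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe); the liburing linkage scan that begins here continues through the per-library checks below. A standalone sketch of that lspci pipeline, lifted from the scripts/common.sh trace above (the surrounding pci_can_use and driver checks are omitted):

    # Enumerate NVMe controllers by numeric PCI class 01/08/02, printing
    # full domain:bus:device.function addresses.
    lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'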
08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 
08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.585 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:12.586 * spdk_dd linked to liburing 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:12.586 08:07:17 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:12.586 08:07:17 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:12.586 08:07:17 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:12.587 08:07:17 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:12.587 08:07:17 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:12.587 08:07:17 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:12.587 08:07:17 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:12.587 08:07:17 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:12.587 08:07:17 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:12.587 08:07:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:12.587 08:07:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:12.587 08:07:17 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:12.587 08:07:17 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.587 08:07:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:12.587 ************************************ 00:07:12.587 START TEST spdk_dd_basic_rw 00:07:12.587 ************************************ 00:07:12.587 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:12.587 * Looking for test storage... 
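At this point the NEEDED scan has matched liburing.so.2, printed '* spdk_dd linked to liburing', and exported liburing_in_use=1, so the dd.sh guard (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) evaluates false and the uring-enabled run proceeds. A hedged sketch of that decision, assuming the same two variables; the branch taken when the guard fires is not shown in this log, so the error exit is illustrative only:

    # liburing_in_use comes from the objdump NEEDED scan above;
    # SPDK_TEST_URING comes from autorun-spdk.conf.
    if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
      # Assumption: a uring test run without a uring-linked spdk_dd should not continue.
      echo 'SPDK_TEST_URING=1 but spdk_dd is not linked to liburing' >&2
      exit 1
    fi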
00:07:12.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.587 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:12.587 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:12.587 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:12.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.847 --rc genhtml_branch_coverage=1 00:07:12.847 --rc genhtml_function_coverage=1 00:07:12.847 --rc genhtml_legend=1 00:07:12.847 --rc geninfo_all_blocks=1 00:07:12.847 --rc geninfo_unexecuted_blocks=1 00:07:12.847 00:07:12.847 ' 00:07:12.847 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:12.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.847 --rc genhtml_branch_coverage=1 00:07:12.847 --rc genhtml_function_coverage=1 00:07:12.847 --rc genhtml_legend=1 00:07:12.847 --rc geninfo_all_blocks=1 00:07:12.847 --rc geninfo_unexecuted_blocks=1 00:07:12.847 00:07:12.847 ' 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:12.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.848 --rc genhtml_branch_coverage=1 00:07:12.848 --rc genhtml_function_coverage=1 00:07:12.848 --rc genhtml_legend=1 00:07:12.848 --rc geninfo_all_blocks=1 00:07:12.848 --rc geninfo_unexecuted_blocks=1 00:07:12.848 00:07:12.848 ' 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:12.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.848 --rc genhtml_branch_coverage=1 00:07:12.848 --rc genhtml_function_coverage=1 00:07:12.848 --rc genhtml_legend=1 00:07:12.848 --rc geninfo_all_blocks=1 00:07:12.848 --rc geninfo_unexecuted_blocks=1 00:07:12.848 00:07:12.848 ' 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.848 08:07:17 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:12.848 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:13.110 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:13.110 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.111 ************************************ 00:07:13.111 START TEST dd_bs_lt_native_bs 00:07:13.111 ************************************ 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.111 08:07:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:13.111 { 00:07:13.111 "subsystems": [ 00:07:13.111 { 00:07:13.111 "subsystem": "bdev", 00:07:13.111 "config": [ 00:07:13.111 { 00:07:13.111 "params": { 00:07:13.111 "trtype": "pcie", 00:07:13.111 "traddr": "0000:00:10.0", 00:07:13.111 "name": "Nvme0" 00:07:13.111 }, 00:07:13.111 "method": "bdev_nvme_attach_controller" 00:07:13.111 }, 00:07:13.111 { 00:07:13.111 "method": "bdev_wait_for_examine" 00:07:13.111 } 00:07:13.111 ] 00:07:13.111 } 00:07:13.111 ] 00:07:13.111 } 00:07:13.111 [2024-11-17 08:07:18.065432] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:13.111 [2024-11-17 08:07:18.065587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:07:13.371 [2024-11-17 08:07:18.251709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.630 [2024-11-17 08:07:18.393458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.630 [2024-11-17 08:07:18.639961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.890 [2024-11-17 08:07:18.864798] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:13.890 [2024-11-17 08:07:18.864893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.826 [2024-11-17 08:07:19.565202] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:15.086 00:07:15.086 real 0m1.899s 00:07:15.086 user 0m1.501s 00:07:15.086 sys 0m0.344s 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.086 08:07:19 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:15.086 ************************************ 00:07:15.086 END TEST dd_bs_lt_native_bs 00:07:15.086 ************************************ 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.086 ************************************ 00:07:15.086 START TEST dd_rw 00:07:15.086 ************************************ 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:15.086 08:07:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.655 08:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:15.655 08:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:15.655 08:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.655 08:07:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.655 { 00:07:15.655 "subsystems": [ 00:07:15.655 { 00:07:15.655 "subsystem": "bdev", 00:07:15.655 "config": [ 00:07:15.655 { 00:07:15.655 "params": { 00:07:15.655 "trtype": "pcie", 00:07:15.655 "traddr": "0000:00:10.0", 00:07:15.655 "name": "Nvme0" 00:07:15.655 }, 00:07:15.655 "method": "bdev_nvme_attach_controller" 00:07:15.655 }, 00:07:15.655 { 00:07:15.655 "method": "bdev_wait_for_examine" 00:07:15.655 } 00:07:15.655 ] 00:07:15.655 } 
00:07:15.655 ] 00:07:15.655 } 00:07:15.655 [2024-11-17 08:07:20.510375] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:15.655 [2024-11-17 08:07:20.510521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61208 ] 00:07:15.915 [2024-11-17 08:07:20.694985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.915 [2024-11-17 08:07:20.838144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.174 [2024-11-17 08:07:21.076483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.433  [2024-11-17T08:07:22.823Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:17.811 00:07:17.811 08:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:17.811 08:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.811 08:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.811 08:07:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.811 { 00:07:17.811 "subsystems": [ 00:07:17.811 { 00:07:17.811 "subsystem": "bdev", 00:07:17.811 "config": [ 00:07:17.811 { 00:07:17.811 "params": { 00:07:17.811 "trtype": "pcie", 00:07:17.811 "traddr": "0000:00:10.0", 00:07:17.811 "name": "Nvme0" 00:07:17.811 }, 00:07:17.811 "method": "bdev_nvme_attach_controller" 00:07:17.811 }, 00:07:17.811 { 00:07:17.811 "method": "bdev_wait_for_examine" 00:07:17.811 } 00:07:17.811 ] 00:07:17.811 } 00:07:17.811 ] 00:07:17.811 } 00:07:17.811 [2024-11-17 08:07:22.614111] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:17.811 [2024-11-17 08:07:22.614239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61239 ] 00:07:17.811 [2024-11-17 08:07:22.800199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.070 [2024-11-17 08:07:22.939093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.329 [2024-11-17 08:07:23.175411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.587  [2024-11-17T08:07:24.536Z] Copying: 60/60 [kB] (average 14 MBps) 00:07:19.524 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.524 08:07:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.524 { 00:07:19.524 "subsystems": [ 00:07:19.524 { 00:07:19.524 "subsystem": "bdev", 00:07:19.524 "config": [ 00:07:19.524 { 00:07:19.524 "params": { 00:07:19.524 "trtype": "pcie", 00:07:19.524 "traddr": "0000:00:10.0", 00:07:19.524 "name": "Nvme0" 00:07:19.524 }, 00:07:19.524 "method": "bdev_nvme_attach_controller" 00:07:19.524 }, 00:07:19.524 { 00:07:19.524 "method": "bdev_wait_for_examine" 00:07:19.524 } 00:07:19.524 ] 00:07:19.524 } 00:07:19.524 ] 00:07:19.524 } 00:07:19.524 [2024-11-17 08:07:24.484068] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:19.524 [2024-11-17 08:07:24.484226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61272 ] 00:07:19.783 [2024-11-17 08:07:24.672836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.041 [2024-11-17 08:07:24.821302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.300 [2024-11-17 08:07:25.071195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.300  [2024-11-17T08:07:26.689Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:21.677 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:21.677 08:07:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.246 08:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:22.246 08:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:22.246 08:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.246 08:07:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.246 { 00:07:22.246 "subsystems": [ 00:07:22.246 { 00:07:22.246 "subsystem": "bdev", 00:07:22.246 "config": [ 00:07:22.246 { 00:07:22.246 "params": { 00:07:22.246 "trtype": "pcie", 00:07:22.246 "traddr": "0000:00:10.0", 00:07:22.246 "name": "Nvme0" 00:07:22.246 }, 00:07:22.246 "method": "bdev_nvme_attach_controller" 00:07:22.246 }, 00:07:22.246 { 00:07:22.246 "method": "bdev_wait_for_examine" 00:07:22.246 } 00:07:22.246 ] 00:07:22.246 } 00:07:22.246 ] 00:07:22.246 } 00:07:22.246 [2024-11-17 08:07:27.154212] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:22.246 [2024-11-17 08:07:27.154355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61309 ] 00:07:22.506 [2024-11-17 08:07:27.329018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.765 [2024-11-17 08:07:27.521428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.765 [2024-11-17 08:07:27.771107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.024  [2024-11-17T08:07:29.415Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:24.403 00:07:24.403 08:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:24.403 08:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:24.403 08:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.403 08:07:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.403 { 00:07:24.403 "subsystems": [ 00:07:24.403 { 00:07:24.403 "subsystem": "bdev", 00:07:24.403 "config": [ 00:07:24.403 { 00:07:24.403 "params": { 00:07:24.403 "trtype": "pcie", 00:07:24.403 "traddr": "0000:00:10.0", 00:07:24.403 "name": "Nvme0" 00:07:24.403 }, 00:07:24.403 "method": "bdev_nvme_attach_controller" 00:07:24.403 }, 00:07:24.403 { 00:07:24.403 "method": "bdev_wait_for_examine" 00:07:24.403 } 00:07:24.403 ] 00:07:24.403 } 00:07:24.403 ] 00:07:24.403 } 00:07:24.403 [2024-11-17 08:07:29.118908] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:24.403 [2024-11-17 08:07:29.119074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61340 ] 00:07:24.403 [2024-11-17 08:07:29.304705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.661 [2024-11-17 08:07:29.454767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.921 [2024-11-17 08:07:29.709035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.183  [2024-11-17T08:07:31.174Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:26.162 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.422 08:07:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.422 { 00:07:26.422 "subsystems": [ 00:07:26.422 { 00:07:26.422 "subsystem": "bdev", 00:07:26.422 "config": [ 00:07:26.422 { 00:07:26.422 "params": { 00:07:26.422 "trtype": "pcie", 00:07:26.422 "traddr": "0000:00:10.0", 00:07:26.422 "name": "Nvme0" 00:07:26.422 }, 00:07:26.422 "method": "bdev_nvme_attach_controller" 00:07:26.422 }, 00:07:26.422 { 00:07:26.422 "method": "bdev_wait_for_examine" 00:07:26.422 } 00:07:26.422 ] 00:07:26.422 } 00:07:26.422 ] 00:07:26.422 } 00:07:26.422 [2024-11-17 08:07:31.295474] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:26.422 [2024-11-17 08:07:31.295609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61373 ] 00:07:26.681 [2024-11-17 08:07:31.483023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.681 [2024-11-17 08:07:31.622302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.940 [2024-11-17 08:07:31.856606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.199  [2024-11-17T08:07:33.149Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:28.137 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:28.137 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.706 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:28.706 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:28.706 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.706 08:07:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.706 { 00:07:28.706 "subsystems": [ 00:07:28.706 { 00:07:28.706 "subsystem": "bdev", 00:07:28.706 "config": [ 00:07:28.706 { 00:07:28.706 "params": { 00:07:28.706 "trtype": "pcie", 00:07:28.706 "traddr": "0000:00:10.0", 00:07:28.706 "name": "Nvme0" 00:07:28.706 }, 00:07:28.706 "method": "bdev_nvme_attach_controller" 00:07:28.706 }, 00:07:28.706 { 00:07:28.706 "method": "bdev_wait_for_examine" 00:07:28.706 } 00:07:28.706 ] 00:07:28.706 } 00:07:28.706 ] 00:07:28.706 } 00:07:28.706 [2024-11-17 08:07:33.598267] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:28.706 [2024-11-17 08:07:33.598412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ] 00:07:28.965 [2024-11-17 08:07:33.786663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.965 [2024-11-17 08:07:33.930140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.224 [2024-11-17 08:07:34.172535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.483  [2024-11-17T08:07:35.874Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:30.862 00:07:30.862 08:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:30.862 08:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:30.862 08:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.862 08:07:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.862 { 00:07:30.862 "subsystems": [ 00:07:30.862 { 00:07:30.862 "subsystem": "bdev", 00:07:30.862 "config": [ 00:07:30.862 { 00:07:30.862 "params": { 00:07:30.862 "trtype": "pcie", 00:07:30.862 "traddr": "0000:00:10.0", 00:07:30.862 "name": "Nvme0" 00:07:30.862 }, 00:07:30.862 "method": "bdev_nvme_attach_controller" 00:07:30.862 }, 00:07:30.862 { 00:07:30.862 "method": "bdev_wait_for_examine" 00:07:30.862 } 00:07:30.862 ] 00:07:30.862 } 00:07:30.862 ] 00:07:30.862 } 00:07:30.862 [2024-11-17 08:07:35.696541] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:30.862 [2024-11-17 08:07:35.696684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:07:31.121 [2024-11-17 08:07:35.879853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.121 [2024-11-17 08:07:36.024685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.381 [2024-11-17 08:07:36.271497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.640  [2024-11-17T08:07:37.590Z] Copying: 56/56 [kB] (average 18 MBps) 00:07:32.578 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.578 08:07:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.578 { 00:07:32.578 "subsystems": [ 00:07:32.578 { 00:07:32.578 "subsystem": "bdev", 00:07:32.578 "config": [ 00:07:32.578 { 00:07:32.578 "params": { 00:07:32.578 "trtype": "pcie", 00:07:32.578 "traddr": "0000:00:10.0", 00:07:32.578 "name": "Nvme0" 00:07:32.578 }, 00:07:32.578 "method": "bdev_nvme_attach_controller" 00:07:32.578 }, 00:07:32.578 { 00:07:32.578 "method": "bdev_wait_for_examine" 00:07:32.578 } 00:07:32.578 ] 00:07:32.578 } 00:07:32.578 ] 00:07:32.578 } 00:07:32.578 [2024-11-17 08:07:37.553300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:32.578 [2024-11-17 08:07:37.553430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:07:32.838 [2024-11-17 08:07:37.738999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.097 [2024-11-17 08:07:37.871595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.357 [2024-11-17 08:07:38.116134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.357  [2024-11-17T08:07:39.750Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.738 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:34.738 08:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.307 08:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:35.307 08:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:35.307 08:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.307 08:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.307 { 00:07:35.307 "subsystems": [ 00:07:35.307 { 00:07:35.307 "subsystem": "bdev", 00:07:35.307 "config": [ 00:07:35.307 { 00:07:35.307 "params": { 00:07:35.307 "trtype": "pcie", 00:07:35.307 "traddr": "0000:00:10.0", 00:07:35.307 "name": "Nvme0" 00:07:35.307 }, 00:07:35.307 "method": "bdev_nvme_attach_controller" 00:07:35.307 }, 00:07:35.307 { 00:07:35.307 "method": "bdev_wait_for_examine" 00:07:35.307 } 00:07:35.307 ] 00:07:35.307 } 00:07:35.307 ] 00:07:35.307 } 00:07:35.307 [2024-11-17 08:07:40.153837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:35.307 [2024-11-17 08:07:40.153994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61499 ] 00:07:35.566 [2024-11-17 08:07:40.339340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.566 [2024-11-17 08:07:40.482882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.825 [2024-11-17 08:07:40.729973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.091  [2024-11-17T08:07:42.042Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:37.030 00:07:37.030 08:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:37.030 08:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:37.030 08:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.030 08:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.030 { 00:07:37.030 "subsystems": [ 00:07:37.030 { 00:07:37.030 "subsystem": "bdev", 00:07:37.030 "config": [ 00:07:37.030 { 00:07:37.030 "params": { 00:07:37.030 "trtype": "pcie", 00:07:37.030 "traddr": "0000:00:10.0", 00:07:37.030 "name": "Nvme0" 00:07:37.030 }, 00:07:37.030 "method": "bdev_nvme_attach_controller" 00:07:37.030 }, 00:07:37.030 { 00:07:37.030 "method": "bdev_wait_for_examine" 00:07:37.030 } 00:07:37.030 ] 00:07:37.030 } 00:07:37.030 ] 00:07:37.030 } 00:07:37.030 [2024-11-17 08:07:42.030990] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:37.031 [2024-11-17 08:07:42.031159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61530 ] 00:07:37.290 [2024-11-17 08:07:42.216820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.549 [2024-11-17 08:07:42.355689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.808 [2024-11-17 08:07:42.591846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.808  [2024-11-17T08:07:44.200Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:39.188 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.188 08:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 { 00:07:39.188 "subsystems": [ 00:07:39.188 { 00:07:39.188 "subsystem": "bdev", 00:07:39.188 "config": [ 00:07:39.188 { 00:07:39.188 "params": { 00:07:39.188 "trtype": "pcie", 00:07:39.188 "traddr": "0000:00:10.0", 00:07:39.188 "name": "Nvme0" 00:07:39.188 }, 00:07:39.188 "method": "bdev_nvme_attach_controller" 00:07:39.188 }, 00:07:39.188 { 00:07:39.188 "method": "bdev_wait_for_examine" 00:07:39.188 } 00:07:39.188 ] 00:07:39.188 } 00:07:39.188 ] 00:07:39.188 } 00:07:39.188 [2024-11-17 08:07:44.160808] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:39.188 [2024-11-17 08:07:44.160979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:07:39.477 [2024-11-17 08:07:44.347168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.797 [2024-11-17 08:07:44.494352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.797 [2024-11-17 08:07:44.745719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.057  [2024-11-17T08:07:46.008Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.996 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.996 08:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.565 08:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:41.565 08:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:41.565 08:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.565 08:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.565 { 00:07:41.565 "subsystems": [ 00:07:41.565 { 00:07:41.565 "subsystem": "bdev", 00:07:41.565 "config": [ 00:07:41.565 { 00:07:41.565 "params": { 00:07:41.565 "trtype": "pcie", 00:07:41.565 "traddr": "0000:00:10.0", 00:07:41.565 "name": "Nvme0" 00:07:41.565 }, 00:07:41.565 "method": "bdev_nvme_attach_controller" 00:07:41.565 }, 00:07:41.565 { 00:07:41.565 "method": "bdev_wait_for_examine" 00:07:41.565 } 00:07:41.565 ] 00:07:41.565 } 00:07:41.565 ] 00:07:41.565 } 00:07:41.565 [2024-11-17 08:07:46.439778] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:41.565 [2024-11-17 08:07:46.439940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61600 ] 00:07:41.825 [2024-11-17 08:07:46.627424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.825 [2024-11-17 08:07:46.775975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.084 [2024-11-17 08:07:47.005911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.344  [2024-11-17T08:07:48.735Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:43.723 00:07:43.723 08:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:43.723 08:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.723 08:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.723 08:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.723 { 00:07:43.723 "subsystems": [ 00:07:43.723 { 00:07:43.723 "subsystem": "bdev", 00:07:43.723 "config": [ 00:07:43.723 { 00:07:43.723 "params": { 00:07:43.723 "trtype": "pcie", 00:07:43.723 "traddr": "0000:00:10.0", 00:07:43.724 "name": "Nvme0" 00:07:43.724 }, 00:07:43.724 "method": "bdev_nvme_attach_controller" 00:07:43.724 }, 00:07:43.724 { 00:07:43.724 "method": "bdev_wait_for_examine" 00:07:43.724 } 00:07:43.724 ] 00:07:43.724 } 00:07:43.724 ] 00:07:43.724 } 00:07:43.724 [2024-11-17 08:07:48.540790] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:43.724 [2024-11-17 08:07:48.540928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61631 ] 00:07:43.724 [2024-11-17 08:07:48.724493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.983 [2024-11-17 08:07:48.856114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.242 [2024-11-17 08:07:49.101724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.501  [2024-11-17T08:07:50.452Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:45.440 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.440 08:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.440 { 00:07:45.440 "subsystems": [ 00:07:45.440 { 00:07:45.440 "subsystem": "bdev", 00:07:45.440 "config": [ 00:07:45.440 { 00:07:45.440 "params": { 00:07:45.440 "trtype": "pcie", 00:07:45.440 "traddr": "0000:00:10.0", 00:07:45.440 "name": "Nvme0" 00:07:45.440 }, 00:07:45.440 "method": "bdev_nvme_attach_controller" 00:07:45.440 }, 00:07:45.440 { 00:07:45.440 "method": "bdev_wait_for_examine" 00:07:45.440 } 00:07:45.440 ] 00:07:45.440 } 00:07:45.440 ] 00:07:45.440 } 00:07:45.440 [2024-11-17 08:07:50.409308] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:45.440 [2024-11-17 08:07:50.409450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61658 ] 00:07:45.700 [2024-11-17 08:07:50.598061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.959 [2024-11-17 08:07:50.736185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.219 [2024-11-17 08:07:50.980377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.219  [2024-11-17T08:07:52.610Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:47.598 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:47.598 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.858 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:47.858 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:47.858 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.858 08:07:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.117 { 00:07:48.117 "subsystems": [ 00:07:48.117 { 00:07:48.117 "subsystem": "bdev", 00:07:48.117 "config": [ 00:07:48.117 { 00:07:48.117 "params": { 00:07:48.117 "trtype": "pcie", 00:07:48.117 "traddr": "0000:00:10.0", 00:07:48.117 "name": "Nvme0" 00:07:48.117 }, 00:07:48.117 "method": "bdev_nvme_attach_controller" 00:07:48.117 }, 00:07:48.117 { 00:07:48.117 "method": "bdev_wait_for_examine" 00:07:48.117 } 00:07:48.117 ] 00:07:48.117 } 00:07:48.117 ] 00:07:48.117 } 00:07:48.117 [2024-11-17 08:07:52.923349] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:48.117 [2024-11-17 08:07:52.923507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61695 ] 00:07:48.117 [2024-11-17 08:07:53.109694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.376 [2024-11-17 08:07:53.244680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.635 [2024-11-17 08:07:53.486933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.895  [2024-11-17T08:07:54.905Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:49.893 00:07:49.893 08:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:49.893 08:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:49.893 08:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.893 08:07:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.893 { 00:07:49.893 "subsystems": [ 00:07:49.893 { 00:07:49.893 "subsystem": "bdev", 00:07:49.893 "config": [ 00:07:49.893 { 00:07:49.893 "params": { 00:07:49.893 "trtype": "pcie", 00:07:49.893 "traddr": "0000:00:10.0", 00:07:49.893 "name": "Nvme0" 00:07:49.893 }, 00:07:49.893 "method": "bdev_nvme_attach_controller" 00:07:49.893 }, 00:07:49.893 { 00:07:49.893 "method": "bdev_wait_for_examine" 00:07:49.893 } 00:07:49.893 ] 00:07:49.893 } 00:07:49.893 ] 00:07:49.893 } 00:07:49.893 [2024-11-17 08:07:54.756776] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:49.893 [2024-11-17 08:07:54.756929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61726 ] 00:07:50.153 [2024-11-17 08:07:54.942435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.153 [2024-11-17 08:07:55.074388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.412 [2024-11-17 08:07:55.309395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.672  [2024-11-17T08:07:57.090Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:52.078 00:07:52.078 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.078 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.079 08:07:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.079 { 00:07:52.079 "subsystems": [ 00:07:52.079 { 00:07:52.079 "subsystem": "bdev", 00:07:52.079 "config": [ 00:07:52.079 { 00:07:52.079 "params": { 00:07:52.079 "trtype": "pcie", 00:07:52.079 "traddr": "0000:00:10.0", 00:07:52.079 "name": "Nvme0" 00:07:52.079 }, 00:07:52.079 "method": "bdev_nvme_attach_controller" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "method": "bdev_wait_for_examine" 00:07:52.079 } 00:07:52.079 ] 00:07:52.079 } 00:07:52.079 ] 00:07:52.079 } 00:07:52.079 [2024-11-17 08:07:56.841851] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
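The records above cover one pass of the dd_rw loop at queue depth 64: a 48 KiB payload (count=3 blocks of bs=16384) is generated into dd.dump0, written to the Nvme0n1 bdev, read back into dd.dump1, and compared with diff -q. A minimal standalone sketch of that cycle, using only the flags visible in the log; the spdk_dd path is the log's own, while bdev.json and the relative file paths are assumptions standing in for the JSON config the suite pipes through /dev/fd/62:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json    # assumed stand-in for the bdev_nvme_attach_controller JSON fed via --json /dev/fd/62 in the log

# write the 48 KiB dump (3 blocks of 16384 bytes) to the bdev at queue depth 64
"$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json "$CONF"

# read the same three blocks back into a second dump file
"$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json "$CONF"

# the data must survive the round trip
diff -q test/dd/dd.dump0 test/dd/dd.dump1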
00:07:52.079 [2024-11-17 08:07:56.841992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:07:52.079 [2024-11-17 08:07:57.024383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.338 [2024-11-17 08:07:57.166256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.603 [2024-11-17 08:07:57.411359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.866  [2024-11-17T08:07:58.817Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:53.805 00:07:53.805 00:07:53.805 real 0m38.668s 00:07:53.805 user 0m31.100s 00:07:53.805 sys 0m21.100s 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.805 ************************************ 00:07:53.805 END TEST dd_rw 00:07:53.805 ************************************ 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.805 ************************************ 00:07:53.805 START TEST dd_rw_offset 00:07:53.805 ************************************ 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:53.805 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:53.806 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ryn4q779nr010o5viuh1nbktvd1qg0sqqpyv0q8b391wxdjwqlccx5p1it5lpccfj3zoe44bjp4b1h96z3gid22kze1rw9a75geoiexxn9fitok58jnppj0rle7dhixs5oj7r1lyrie3dxbqew1cfc52o24t3uvotv9r6ruzz6c26zek2kp6nsyabo6nhtcyzfyl0kyexnij0rewwhi63vn2t3nj5hn80etgieci9033yl6zgmzhdblsdfbxdobrm0t3ccpd56ng4z700qasg7q8roqqld35c5m3ycaxyww0bhmdkb14wuncunt3q90gsxv2nvl6bud3ia1ca7vze0qhb6svlm89ganf760jtilf0za38kdh9atk33lypwjn6f6ntl3tprr1wedwbll28uc8koxn0biy0xic165yv85sobgqkero44orsoza7mcyru1uxxsv1a78r2avm595bucxwlb2fqa4erdjbxqm5pxyhukbe9uhdx3apmfeygbkzk1vnu5598f096st1kacetj0uu18sv4mzsye2wnu2o12m65hh4rfkw1yxpq3854c4ka4iriqt49qwmt3l5ti3a2fiupau0wbekss2revhg5fiib01nadync75ou4e8gh2jnt6mw98amltzvqohefyt0ejitci9mfaydrzds3uk9dttunnuip3zbkaie28gmnpu5mlvzegcmm4g8ykg5j60pzr18w2ztcpv2hs6qzxxss2ea0i1a32gjw4km16z0jfe1s6gn662ea9gy9aar1r0shliwpv4m4ieiwy0f1tiklrozjg5ez7jyrgzja4uxybmktpjkqqurdg3xznvx7hynninrxrzov8xy7ej9ssf3yuykjlj18foi9te8lunpo00pw1kvdwy328lhpoxa4rtrqmnwcaqlf8akpq4bjh4746s1ma1pn98wt43dvrs3q5o9gb2ixz3my3jsnhfqh06jfv2d9h2ol2egtm80tbiwvjc24d8my141rso9pr9cfb1eskh9zppruobth91qm0hflok7706y5u6tab7ilkf6e4xk98q46w5kxqb8hbxjog06tbx5z2v431k48l3a7m8irwdcv1r35c52yhqlwcduuclm5qmguxwgonhdvfm56q205ddy76sjhn4r3rauqhgesgopinrsmrdespb101i5guy8wc9a7hbwwdqhwjqvtzkjnpa17gorptoh7qr39w6wounew3ge74jj0huxxgec2s5ptv23hc2i1ybimf23dm23u6ehau12gm01aucp1owks4qweofkab0etbj019qfo02c1mhiy6rmf994pn57dkyxh8upgyl4ws6mig1998i22g251uxmra7j4ol1otky2h44cafyczjwz5mm32yx319jec1dx6344iuvrrtet7bjgj9uwpx9n4dfxqft6h13fyxlwqltf26fdiknjwf9vjviaus21bm5jdeqo86ug53jl5z1thr2o8ctzlvsvo1ayy17ypcadpb5zmjtdpwap5pyynr1lljqp0n0uxx34gsav1crsloooed3mwy5rdrwqnqdt3m3nr6xtkifkil99u182nqi3jswk252ausq25otxt6znx2edeac839xf1urcscssug2werzl8shv2toyycns9myupaqaewk5sng2c2ok9083wrz2veyovn4objh8w3gikzdn4cxlotxsjge2fywcg4zfs8vqlepc9sgpiv4f6mh15dhsrq3zd5duntutul7ta6oyc9yy9ukv1qmkvz7qgnxd3ql34x1olbnw99q6uvbeg9jm450ar681yp3xzhz3uoryu7ycg8p2pjs2zbgw0o639q1sk6dctsj36am5susmfmv2fzvsjiratlbmjvrx9gnwk82afriz3o34nmphor7s7wm15blbspmabr33kumrq5gr976sbh5hth8kv9r6e6cp4719np4x6ch699ut3j2f2dqibehqql89d2satdipvdcaokyg7794mkbnuioel2z79xdanz5mjy08r6olgyn2gdhlu2acklu1eapi2eubvkqkmqtuopt0a3ngk4dc22krrrz9fs9ktd2x20oq1dnyugl8eikyxq6q1gnga4o77drn0m1uhzv5z20afhccc32mra1f0isegnpnpaskgoinnpstde29qbj653eg9tewusvypx8fydjw5sq4g3fw4n1n9vet7wvlzh3p6qrk8fpzaqtvxlthxfm427m9hkfwovb2z6h6eqmn71bzq2g5mctz0l3moaqdxfzbrkgazoivdpm46af1qtfx6ga5wgn20n3bip7c9fybnav6zjsjdvjc3kyxwrcasprdwu1w64m6slgj6litqwmnb1bomhdag9udroqv46a4ty9qdm0a9l6suuq8sd3dt5z6ls72wleqcz7b0ra2p78fygo916ih8l1bgwwnjfm7s4oftp5vjv1q6ujjx4jr7sz7golfjm15ks64qx10noazz0kawltxb1b8u5zgj9qdbpr0fcmm5cktjuga7nd6jiuvpoijwyoxsdssewxvwni4ov9eu26sibzbdfy1c874lx0659tyoca9yfykzykx2ul1hpc6fwxuphmg5kvu23py3h1se934s4flp9hrsa7o9guquub171pceaaqt9r7k69tdh1b52ezvrm4clreb5qm1mqlsztrwuwuukt6u9dwqoe8qr9vfnmchpy33bbuodby5lspl2erg3vkbz3cbyilpe3rgpjmqpxaxa1g0uh8525hkfbnttr1ku3k46sujki020kktnqyec27pgxxckvz45d27x7pri8ylfrfkdra55yfhuco39fgk18kxvkesguqx90hckvaync1ze7p8k97zzvkz6ed2m305tb2czn8x1ab6d7mbxuktagbcot07p9qjpy74qvxa9m041vaywlil4v40cgm95ye9j307l9rgfalgj2cnwmzy6c0i48uquibjicn63h5zmwib4rsmuhpw3dx0lrch0y33sy8mbuzkdwws7g0o9idx834255oldaiitsmr1rxjo5k23978rd2psmjisy1arq1jxek331w3l7fkaw3laz96i2c1n6pztc6588p18r7y8dja45uo8nkaswr8tkkzvrtd1uycklh2vrrgrmw4pnhexuwhwyt7exk8950n5ahedtketbw5iuddjspw7tvczy8roqh3o19f2887ljk88z4xdkb5lm019ee67a66b61u4y5aq8e721i240gzxjgccj76sha6hssfmtzhmwr13psxp49s9k3dqec29de56kb6gcmji5b5vn4o4cs2bfytwls2se6vbludbqbrpgjjt3wk1e5xm29yv93o7bw4wcn7sdezbfd11heitua9h104pq1zixbecqwqotm667r0updz2fihlxz5zykx32aqhl28cqzks1yy96s8eih7tizvjfhj66n72zo5eth6xtljiixwbns2routhe3850wquilfovkr9mixgaz0wy2plyjlk
kqef8g9tbnodqemyqglasbmtwiz2gucc9o8d1ymdhmz8c2eg5omconw1a2wknu0bantts2xwlm559mas4tll4x68h78w2rtx1d59r8l5lwo4z191itcv0u3nepd701nbi0lfw7sqoq73me2mxfpv3n7j4le9vcuxkkbrgsu8qqukx0px84zz6ezm49mpj68hirjghuy6je6rw79y1v8jssc036gmfekwx4wv0b7d9ja85b3s4rga7fe8hukx3fjo7wjd0jq2pmy98qigqnw4vj86lthtqxihhcibdted069i6pexxpnq5vnh0vrnbrxtyj6xp018usre9im4wociv17epw5k6g2z5otnbn0lpckriq2camvbrm3nwhwknwo0rz2lfg817c6kcrl3xt2ib814n6skz11egorggljls2y69jrl22lzm20kdk367h9q2aj95lwg19w4ig3f418ozyty1u4j7frc82aoyp95k24ul82x3rcpcq27h28rvjn1bf4hcyb6grow155odkfq5p3x2oivl5bard 00:07:53.806 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:53.806 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:53.806 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:53.806 08:07:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:53.806 { 00:07:53.806 "subsystems": [ 00:07:53.806 { 00:07:53.806 "subsystem": "bdev", 00:07:53.806 "config": [ 00:07:53.806 { 00:07:53.806 "params": { 00:07:53.806 "trtype": "pcie", 00:07:53.806 "traddr": "0000:00:10.0", 00:07:53.806 "name": "Nvme0" 00:07:53.806 }, 00:07:53.806 "method": "bdev_nvme_attach_controller" 00:07:53.806 }, 00:07:53.806 { 00:07:53.806 "method": "bdev_wait_for_examine" 00:07:53.806 } 00:07:53.806 ] 00:07:53.806 } 00:07:53.806 ] 00:07:53.806 } 00:07:54.065 [2024-11-17 08:07:58.820688] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:54.065 [2024-11-17 08:07:58.820829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61807 ] 00:07:54.065 [2024-11-17 08:07:59.002751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.324 [2024-11-17 08:07:59.134932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.583 [2024-11-17 08:07:59.374620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.842  [2024-11-17T08:08:00.792Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:55.780 00:07:55.780 08:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:55.780 08:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:55.780 08:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:55.780 08:08:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:56.040 { 00:07:56.040 "subsystems": [ 00:07:56.040 { 00:07:56.040 "subsystem": "bdev", 00:07:56.040 "config": [ 00:07:56.040 { 00:07:56.040 "params": { 00:07:56.040 "trtype": "pcie", 00:07:56.040 "traddr": "0000:00:10.0", 00:07:56.040 "name": "Nvme0" 00:07:56.040 }, 00:07:56.040 "method": "bdev_nvme_attach_controller" 00:07:56.040 }, 00:07:56.040 { 00:07:56.040 "method": "bdev_wait_for_examine" 00:07:56.040 } 00:07:56.040 ] 00:07:56.040 } 00:07:56.040 ] 00:07:56.040 } 00:07:56.040 [2024-11-17 08:08:00.881398] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
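dd_rw_offset exercises the --seek/--skip pair: the long data= string above is a 4 KiB random payload written one block into the bdev with --seek=1 (the 4096/4096 copy above), then read back from the same offset with --skip=1 --count=1 before the byte-for-byte comparison that follows. A hedged sketch of that offset round trip, with the same assumed config file as before and --seek/--skip taken to follow ordinary dd semantics (output offset and input offset respectively):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json    # assumed, as above

# write dd.dump0 one block into Nvme0n1 (--seek offsets the output)
"$SPDK_DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json "$CONF"

# read a single block back from the same offset (--skip offsets the input)
"$SPDK_DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json "$CONF"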
00:07:56.040 [2024-11-17 08:08:00.881532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61838 ] 00:07:56.298 [2024-11-17 08:08:01.062576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.298 [2024-11-17 08:08:01.195508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.557 [2024-11-17 08:08:01.430064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.816  [2024-11-17T08:08:02.767Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:57.755 00:07:57.755 08:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:57.756 08:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ryn4q779nr010o5viuh1nbktvd1qg0sqqpyv0q8b391wxdjwqlccx5p1it5lpccfj3zoe44bjp4b1h96z3gid22kze1rw9a75geoiexxn9fitok58jnppj0rle7dhixs5oj7r1lyrie3dxbqew1cfc52o24t3uvotv9r6ruzz6c26zek2kp6nsyabo6nhtcyzfyl0kyexnij0rewwhi63vn2t3nj5hn80etgieci9033yl6zgmzhdblsdfbxdobrm0t3ccpd56ng4z700qasg7q8roqqld35c5m3ycaxyww0bhmdkb14wuncunt3q90gsxv2nvl6bud3ia1ca7vze0qhb6svlm89ganf760jtilf0za38kdh9atk33lypwjn6f6ntl3tprr1wedwbll28uc8koxn0biy0xic165yv85sobgqkero44orsoza7mcyru1uxxsv1a78r2avm595bucxwlb2fqa4erdjbxqm5pxyhukbe9uhdx3apmfeygbkzk1vnu5598f096st1kacetj0uu18sv4mzsye2wnu2o12m65hh4rfkw1yxpq3854c4ka4iriqt49qwmt3l5ti3a2fiupau0wbekss2revhg5fiib01nadync75ou4e8gh2jnt6mw98amltzvqohefyt0ejitci9mfaydrzds3uk9dttunnuip3zbkaie28gmnpu5mlvzegcmm4g8ykg5j60pzr18w2ztcpv2hs6qzxxss2ea0i1a32gjw4km16z0jfe1s6gn662ea9gy9aar1r0shliwpv4m4ieiwy0f1tiklrozjg5ez7jyrgzja4uxybmktpjkqqurdg3xznvx7hynninrxrzov8xy7ej9ssf3yuykjlj18foi9te8lunpo00pw1kvdwy328lhpoxa4rtrqmnwcaqlf8akpq4bjh4746s1ma1pn98wt43dvrs3q5o9gb2ixz3my3jsnhfqh06jfv2d9h2ol2egtm80tbiwvjc24d8my141rso9pr9cfb1eskh9zppruobth91qm0hflok7706y5u6tab7ilkf6e4xk98q46w5kxqb8hbxjog06tbx5z2v431k48l3a7m8irwdcv1r35c52yhqlwcduuclm5qmguxwgonhdvfm56q205ddy76sjhn4r3rauqhgesgopinrsmrdespb101i5guy8wc9a7hbwwdqhwjqvtzkjnpa17gorptoh7qr39w6wounew3ge74jj0huxxgec2s5ptv23hc2i1ybimf23dm23u6ehau12gm01aucp1owks4qweofkab0etbj019qfo02c1mhiy6rmf994pn57dkyxh8upgyl4ws6mig1998i22g251uxmra7j4ol1otky2h44cafyczjwz5mm32yx319jec1dx6344iuvrrtet7bjgj9uwpx9n4dfxqft6h13fyxlwqltf26fdiknjwf9vjviaus21bm5jdeqo86ug53jl5z1thr2o8ctzlvsvo1ayy17ypcadpb5zmjtdpwap5pyynr1lljqp0n0uxx34gsav1crsloooed3mwy5rdrwqnqdt3m3nr6xtkifkil99u182nqi3jswk252ausq25otxt6znx2edeac839xf1urcscssug2werzl8shv2toyycns9myupaqaewk5sng2c2ok9083wrz2veyovn4objh8w3gikzdn4cxlotxsjge2fywcg4zfs8vqlepc9sgpiv4f6mh15dhsrq3zd5duntutul7ta6oyc9yy9ukv1qmkvz7qgnxd3ql34x1olbnw99q6uvbeg9jm450ar681yp3xzhz3uoryu7ycg8p2pjs2zbgw0o639q1sk6dctsj36am5susmfmv2fzvsjiratlbmjvrx9gnwk82afriz3o34nmphor7s7wm15blbspmabr33kumrq5gr976sbh5hth8kv9r6e6cp4719np4x6ch699ut3j2f2dqibehqql89d2satdipvdcaokyg7794mkbnuioel2z79xdanz5mjy08r6olgyn2gdhlu2acklu1eapi2eubvkqkmqtuopt0a3ngk4dc22krrrz9fs9ktd2x20oq1dnyugl8eikyxq6q1gnga4o77drn0m1uhzv5z20afhccc32mra1f0isegnpnpaskgoinnpstde29qbj653eg9tewusvypx8fydjw5sq4g3fw4n1n9vet7wvlzh3p6qrk8fpzaqtvxlthxfm427m9hkfwovb2z6h6eqmn71bzq2g5mctz0l3moaqdxfzbrkgazoivdpm46af1qtfx6ga5wgn20n3bip7c9fybnav6zjsjdvjc3kyxwrcasprdwu1w64m6slgj6litqwmnb1bomhdag9udroqv46a4ty9qdm0a9l6suuq8sd3dt5z6ls72wleqcz7b0ra2p78fygo916ih8l1bgwwnjfm7s4oftp5vjv1q6ujjx4jr7sz7golfjm15ks64qx10noazz0kawltxb1b8u5zgj9qdbpr0fcmm5cktjuga7nd6jiuvpoijwyoxsdssewxvwni4ov9eu26sibzbdfy1c874lx0659ty
oca9yfykzykx2ul1hpc6fwxuphmg5kvu23py3h1se934s4flp9hrsa7o9guquub171pceaaqt9r7k69tdh1b52ezvrm4clreb5qm1mqlsztrwuwuukt6u9dwqoe8qr9vfnmchpy33bbuodby5lspl2erg3vkbz3cbyilpe3rgpjmqpxaxa1g0uh8525hkfbnttr1ku3k46sujki020kktnqyec27pgxxckvz45d27x7pri8ylfrfkdra55yfhuco39fgk18kxvkesguqx90hckvaync1ze7p8k97zzvkz6ed2m305tb2czn8x1ab6d7mbxuktagbcot07p9qjpy74qvxa9m041vaywlil4v40cgm95ye9j307l9rgfalgj2cnwmzy6c0i48uquibjicn63h5zmwib4rsmuhpw3dx0lrch0y33sy8mbuzkdwws7g0o9idx834255oldaiitsmr1rxjo5k23978rd2psmjisy1arq1jxek331w3l7fkaw3laz96i2c1n6pztc6588p18r7y8dja45uo8nkaswr8tkkzvrtd1uycklh2vrrgrmw4pnhexuwhwyt7exk8950n5ahedtketbw5iuddjspw7tvczy8roqh3o19f2887ljk88z4xdkb5lm019ee67a66b61u4y5aq8e721i240gzxjgccj76sha6hssfmtzhmwr13psxp49s9k3dqec29de56kb6gcmji5b5vn4o4cs2bfytwls2se6vbludbqbrpgjjt3wk1e5xm29yv93o7bw4wcn7sdezbfd11heitua9h104pq1zixbecqwqotm667r0updz2fihlxz5zykx32aqhl28cqzks1yy96s8eih7tizvjfhj66n72zo5eth6xtljiixwbns2routhe3850wquilfovkr9mixgaz0wy2plyjlkkqef8g9tbnodqemyqglasbmtwiz2gucc9o8d1ymdhmz8c2eg5omconw1a2wknu0bantts2xwlm559mas4tll4x68h78w2rtx1d59r8l5lwo4z191itcv0u3nepd701nbi0lfw7sqoq73me2mxfpv3n7j4le9vcuxkkbrgsu8qqukx0px84zz6ezm49mpj68hirjghuy6je6rw79y1v8jssc036gmfekwx4wv0b7d9ja85b3s4rga7fe8hukx3fjo7wjd0jq2pmy98qigqnw4vj86lthtqxihhcibdted069i6pexxpnq5vnh0vrnbrxtyj6xp018usre9im4wociv17epw5k6g2z5otnbn0lpckriq2camvbrm3nwhwknwo0rz2lfg817c6kcrl3xt2ib814n6skz11egorggljls2y69jrl22lzm20kdk367h9q2aj95lwg19w4ig3f418ozyty1u4j7frc82aoyp95k24ul82x3rcpcq27h28rvjn1bf4hcyb6grow155odkfq5p3x2oivl5bard == \r\y\n\4\q\7\7\9\n\r\0\1\0\o\5\v\i\u\h\1\n\b\k\t\v\d\1\q\g\0\s\q\q\p\y\v\0\q\8\b\3\9\1\w\x\d\j\w\q\l\c\c\x\5\p\1\i\t\5\l\p\c\c\f\j\3\z\o\e\4\4\b\j\p\4\b\1\h\9\6\z\3\g\i\d\2\2\k\z\e\1\r\w\9\a\7\5\g\e\o\i\e\x\x\n\9\f\i\t\o\k\5\8\j\n\p\p\j\0\r\l\e\7\d\h\i\x\s\5\o\j\7\r\1\l\y\r\i\e\3\d\x\b\q\e\w\1\c\f\c\5\2\o\2\4\t\3\u\v\o\t\v\9\r\6\r\u\z\z\6\c\2\6\z\e\k\2\k\p\6\n\s\y\a\b\o\6\n\h\t\c\y\z\f\y\l\0\k\y\e\x\n\i\j\0\r\e\w\w\h\i\6\3\v\n\2\t\3\n\j\5\h\n\8\0\e\t\g\i\e\c\i\9\0\3\3\y\l\6\z\g\m\z\h\d\b\l\s\d\f\b\x\d\o\b\r\m\0\t\3\c\c\p\d\5\6\n\g\4\z\7\0\0\q\a\s\g\7\q\8\r\o\q\q\l\d\3\5\c\5\m\3\y\c\a\x\y\w\w\0\b\h\m\d\k\b\1\4\w\u\n\c\u\n\t\3\q\9\0\g\s\x\v\2\n\v\l\6\b\u\d\3\i\a\1\c\a\7\v\z\e\0\q\h\b\6\s\v\l\m\8\9\g\a\n\f\7\6\0\j\t\i\l\f\0\z\a\3\8\k\d\h\9\a\t\k\3\3\l\y\p\w\j\n\6\f\6\n\t\l\3\t\p\r\r\1\w\e\d\w\b\l\l\2\8\u\c\8\k\o\x\n\0\b\i\y\0\x\i\c\1\6\5\y\v\8\5\s\o\b\g\q\k\e\r\o\4\4\o\r\s\o\z\a\7\m\c\y\r\u\1\u\x\x\s\v\1\a\7\8\r\2\a\v\m\5\9\5\b\u\c\x\w\l\b\2\f\q\a\4\e\r\d\j\b\x\q\m\5\p\x\y\h\u\k\b\e\9\u\h\d\x\3\a\p\m\f\e\y\g\b\k\z\k\1\v\n\u\5\5\9\8\f\0\9\6\s\t\1\k\a\c\e\t\j\0\u\u\1\8\s\v\4\m\z\s\y\e\2\w\n\u\2\o\1\2\m\6\5\h\h\4\r\f\k\w\1\y\x\p\q\3\8\5\4\c\4\k\a\4\i\r\i\q\t\4\9\q\w\m\t\3\l\5\t\i\3\a\2\f\i\u\p\a\u\0\w\b\e\k\s\s\2\r\e\v\h\g\5\f\i\i\b\0\1\n\a\d\y\n\c\7\5\o\u\4\e\8\g\h\2\j\n\t\6\m\w\9\8\a\m\l\t\z\v\q\o\h\e\f\y\t\0\e\j\i\t\c\i\9\m\f\a\y\d\r\z\d\s\3\u\k\9\d\t\t\u\n\n\u\i\p\3\z\b\k\a\i\e\2\8\g\m\n\p\u\5\m\l\v\z\e\g\c\m\m\4\g\8\y\k\g\5\j\6\0\p\z\r\1\8\w\2\z\t\c\p\v\2\h\s\6\q\z\x\x\s\s\2\e\a\0\i\1\a\3\2\g\j\w\4\k\m\1\6\z\0\j\f\e\1\s\6\g\n\6\6\2\e\a\9\g\y\9\a\a\r\1\r\0\s\h\l\i\w\p\v\4\m\4\i\e\i\w\y\0\f\1\t\i\k\l\r\o\z\j\g\5\e\z\7\j\y\r\g\z\j\a\4\u\x\y\b\m\k\t\p\j\k\q\q\u\r\d\g\3\x\z\n\v\x\7\h\y\n\n\i\n\r\x\r\z\o\v\8\x\y\7\e\j\9\s\s\f\3\y\u\y\k\j\l\j\1\8\f\o\i\9\t\e\8\l\u\n\p\o\0\0\p\w\1\k\v\d\w\y\3\2\8\l\h\p\o\x\a\4\r\t\r\q\m\n\w\c\a\q\l\f\8\a\k\p\q\4\b\j\h\4\7\4\6\s\1\m\a\1\p\n\9\8\w\t\4\3\d\v\r\s\3\q\5\o\9\g\b\2\i\x\z\3\m\y\3\j\s\n\h\f\q\h\0\6\j\f\v\2\d\9\h\2\o\l\2\e\g\t\m\8\0\t\b\i\w\v\j\c\2\4\d\8\m\y\1\4\1\r\s\o\9\p\r\9\c\f\b\1\e\s\k\h\9\
z\p\p\r\u\o\b\t\h\9\1\q\m\0\h\f\l\o\k\7\7\0\6\y\5\u\6\t\a\b\7\i\l\k\f\6\e\4\x\k\9\8\q\4\6\w\5\k\x\q\b\8\h\b\x\j\o\g\0\6\t\b\x\5\z\2\v\4\3\1\k\4\8\l\3\a\7\m\8\i\r\w\d\c\v\1\r\3\5\c\5\2\y\h\q\l\w\c\d\u\u\c\l\m\5\q\m\g\u\x\w\g\o\n\h\d\v\f\m\5\6\q\2\0\5\d\d\y\7\6\s\j\h\n\4\r\3\r\a\u\q\h\g\e\s\g\o\p\i\n\r\s\m\r\d\e\s\p\b\1\0\1\i\5\g\u\y\8\w\c\9\a\7\h\b\w\w\d\q\h\w\j\q\v\t\z\k\j\n\p\a\1\7\g\o\r\p\t\o\h\7\q\r\3\9\w\6\w\o\u\n\e\w\3\g\e\7\4\j\j\0\h\u\x\x\g\e\c\2\s\5\p\t\v\2\3\h\c\2\i\1\y\b\i\m\f\2\3\d\m\2\3\u\6\e\h\a\u\1\2\g\m\0\1\a\u\c\p\1\o\w\k\s\4\q\w\e\o\f\k\a\b\0\e\t\b\j\0\1\9\q\f\o\0\2\c\1\m\h\i\y\6\r\m\f\9\9\4\p\n\5\7\d\k\y\x\h\8\u\p\g\y\l\4\w\s\6\m\i\g\1\9\9\8\i\2\2\g\2\5\1\u\x\m\r\a\7\j\4\o\l\1\o\t\k\y\2\h\4\4\c\a\f\y\c\z\j\w\z\5\m\m\3\2\y\x\3\1\9\j\e\c\1\d\x\6\3\4\4\i\u\v\r\r\t\e\t\7\b\j\g\j\9\u\w\p\x\9\n\4\d\f\x\q\f\t\6\h\1\3\f\y\x\l\w\q\l\t\f\2\6\f\d\i\k\n\j\w\f\9\v\j\v\i\a\u\s\2\1\b\m\5\j\d\e\q\o\8\6\u\g\5\3\j\l\5\z\1\t\h\r\2\o\8\c\t\z\l\v\s\v\o\1\a\y\y\1\7\y\p\c\a\d\p\b\5\z\m\j\t\d\p\w\a\p\5\p\y\y\n\r\1\l\l\j\q\p\0\n\0\u\x\x\3\4\g\s\a\v\1\c\r\s\l\o\o\o\e\d\3\m\w\y\5\r\d\r\w\q\n\q\d\t\3\m\3\n\r\6\x\t\k\i\f\k\i\l\9\9\u\1\8\2\n\q\i\3\j\s\w\k\2\5\2\a\u\s\q\2\5\o\t\x\t\6\z\n\x\2\e\d\e\a\c\8\3\9\x\f\1\u\r\c\s\c\s\s\u\g\2\w\e\r\z\l\8\s\h\v\2\t\o\y\y\c\n\s\9\m\y\u\p\a\q\a\e\w\k\5\s\n\g\2\c\2\o\k\9\0\8\3\w\r\z\2\v\e\y\o\v\n\4\o\b\j\h\8\w\3\g\i\k\z\d\n\4\c\x\l\o\t\x\s\j\g\e\2\f\y\w\c\g\4\z\f\s\8\v\q\l\e\p\c\9\s\g\p\i\v\4\f\6\m\h\1\5\d\h\s\r\q\3\z\d\5\d\u\n\t\u\t\u\l\7\t\a\6\o\y\c\9\y\y\9\u\k\v\1\q\m\k\v\z\7\q\g\n\x\d\3\q\l\3\4\x\1\o\l\b\n\w\9\9\q\6\u\v\b\e\g\9\j\m\4\5\0\a\r\6\8\1\y\p\3\x\z\h\z\3\u\o\r\y\u\7\y\c\g\8\p\2\p\j\s\2\z\b\g\w\0\o\6\3\9\q\1\s\k\6\d\c\t\s\j\3\6\a\m\5\s\u\s\m\f\m\v\2\f\z\v\s\j\i\r\a\t\l\b\m\j\v\r\x\9\g\n\w\k\8\2\a\f\r\i\z\3\o\3\4\n\m\p\h\o\r\7\s\7\w\m\1\5\b\l\b\s\p\m\a\b\r\3\3\k\u\m\r\q\5\g\r\9\7\6\s\b\h\5\h\t\h\8\k\v\9\r\6\e\6\c\p\4\7\1\9\n\p\4\x\6\c\h\6\9\9\u\t\3\j\2\f\2\d\q\i\b\e\h\q\q\l\8\9\d\2\s\a\t\d\i\p\v\d\c\a\o\k\y\g\7\7\9\4\m\k\b\n\u\i\o\e\l\2\z\7\9\x\d\a\n\z\5\m\j\y\0\8\r\6\o\l\g\y\n\2\g\d\h\l\u\2\a\c\k\l\u\1\e\a\p\i\2\e\u\b\v\k\q\k\m\q\t\u\o\p\t\0\a\3\n\g\k\4\d\c\2\2\k\r\r\r\z\9\f\s\9\k\t\d\2\x\2\0\o\q\1\d\n\y\u\g\l\8\e\i\k\y\x\q\6\q\1\g\n\g\a\4\o\7\7\d\r\n\0\m\1\u\h\z\v\5\z\2\0\a\f\h\c\c\c\3\2\m\r\a\1\f\0\i\s\e\g\n\p\n\p\a\s\k\g\o\i\n\n\p\s\t\d\e\2\9\q\b\j\6\5\3\e\g\9\t\e\w\u\s\v\y\p\x\8\f\y\d\j\w\5\s\q\4\g\3\f\w\4\n\1\n\9\v\e\t\7\w\v\l\z\h\3\p\6\q\r\k\8\f\p\z\a\q\t\v\x\l\t\h\x\f\m\4\2\7\m\9\h\k\f\w\o\v\b\2\z\6\h\6\e\q\m\n\7\1\b\z\q\2\g\5\m\c\t\z\0\l\3\m\o\a\q\d\x\f\z\b\r\k\g\a\z\o\i\v\d\p\m\4\6\a\f\1\q\t\f\x\6\g\a\5\w\g\n\2\0\n\3\b\i\p\7\c\9\f\y\b\n\a\v\6\z\j\s\j\d\v\j\c\3\k\y\x\w\r\c\a\s\p\r\d\w\u\1\w\6\4\m\6\s\l\g\j\6\l\i\t\q\w\m\n\b\1\b\o\m\h\d\a\g\9\u\d\r\o\q\v\4\6\a\4\t\y\9\q\d\m\0\a\9\l\6\s\u\u\q\8\s\d\3\d\t\5\z\6\l\s\7\2\w\l\e\q\c\z\7\b\0\r\a\2\p\7\8\f\y\g\o\9\1\6\i\h\8\l\1\b\g\w\w\n\j\f\m\7\s\4\o\f\t\p\5\v\j\v\1\q\6\u\j\j\x\4\j\r\7\s\z\7\g\o\l\f\j\m\1\5\k\s\6\4\q\x\1\0\n\o\a\z\z\0\k\a\w\l\t\x\b\1\b\8\u\5\z\g\j\9\q\d\b\p\r\0\f\c\m\m\5\c\k\t\j\u\g\a\7\n\d\6\j\i\u\v\p\o\i\j\w\y\o\x\s\d\s\s\e\w\x\v\w\n\i\4\o\v\9\e\u\2\6\s\i\b\z\b\d\f\y\1\c\8\7\4\l\x\0\6\5\9\t\y\o\c\a\9\y\f\y\k\z\y\k\x\2\u\l\1\h\p\c\6\f\w\x\u\p\h\m\g\5\k\v\u\2\3\p\y\3\h\1\s\e\9\3\4\s\4\f\l\p\9\h\r\s\a\7\o\9\g\u\q\u\u\b\1\7\1\p\c\e\a\a\q\t\9\r\7\k\6\9\t\d\h\1\b\5\2\e\z\v\r\m\4\c\l\r\e\b\5\q\m\1\m\q\l\s\z\t\r\w\u\w\u\u\k\t\6\u\9\d\w\q\o\e\8\q\r\9\v\f\n\m\c\h\p\y\3\3\b\b\u\o\d\b\y\5\l\s\p\l\2\e\r\g\3\v\k\b\z\3\c\b\y\i\l\p\e\3\r\g\p\j\m\q\p\x\a\x\a\1\g\0\u\h\8\5\2\5\h\k\f\b\n\t\t\r\1\k\u\3\k\4
\6\s\u\j\k\i\0\2\0\k\k\t\n\q\y\e\c\2\7\p\g\x\x\c\k\v\z\4\5\d\2\7\x\7\p\r\i\8\y\l\f\r\f\k\d\r\a\5\5\y\f\h\u\c\o\3\9\f\g\k\1\8\k\x\v\k\e\s\g\u\q\x\9\0\h\c\k\v\a\y\n\c\1\z\e\7\p\8\k\9\7\z\z\v\k\z\6\e\d\2\m\3\0\5\t\b\2\c\z\n\8\x\1\a\b\6\d\7\m\b\x\u\k\t\a\g\b\c\o\t\0\7\p\9\q\j\p\y\7\4\q\v\x\a\9\m\0\4\1\v\a\y\w\l\i\l\4\v\4\0\c\g\m\9\5\y\e\9\j\3\0\7\l\9\r\g\f\a\l\g\j\2\c\n\w\m\z\y\6\c\0\i\4\8\u\q\u\i\b\j\i\c\n\6\3\h\5\z\m\w\i\b\4\r\s\m\u\h\p\w\3\d\x\0\l\r\c\h\0\y\3\3\s\y\8\m\b\u\z\k\d\w\w\s\7\g\0\o\9\i\d\x\8\3\4\2\5\5\o\l\d\a\i\i\t\s\m\r\1\r\x\j\o\5\k\2\3\9\7\8\r\d\2\p\s\m\j\i\s\y\1\a\r\q\1\j\x\e\k\3\3\1\w\3\l\7\f\k\a\w\3\l\a\z\9\6\i\2\c\1\n\6\p\z\t\c\6\5\8\8\p\1\8\r\7\y\8\d\j\a\4\5\u\o\8\n\k\a\s\w\r\8\t\k\k\z\v\r\t\d\1\u\y\c\k\l\h\2\v\r\r\g\r\m\w\4\p\n\h\e\x\u\w\h\w\y\t\7\e\x\k\8\9\5\0\n\5\a\h\e\d\t\k\e\t\b\w\5\i\u\d\d\j\s\p\w\7\t\v\c\z\y\8\r\o\q\h\3\o\1\9\f\2\8\8\7\l\j\k\8\8\z\4\x\d\k\b\5\l\m\0\1\9\e\e\6\7\a\6\6\b\6\1\u\4\y\5\a\q\8\e\7\2\1\i\2\4\0\g\z\x\j\g\c\c\j\7\6\s\h\a\6\h\s\s\f\m\t\z\h\m\w\r\1\3\p\s\x\p\4\9\s\9\k\3\d\q\e\c\2\9\d\e\5\6\k\b\6\g\c\m\j\i\5\b\5\v\n\4\o\4\c\s\2\b\f\y\t\w\l\s\2\s\e\6\v\b\l\u\d\b\q\b\r\p\g\j\j\t\3\w\k\1\e\5\x\m\2\9\y\v\9\3\o\7\b\w\4\w\c\n\7\s\d\e\z\b\f\d\1\1\h\e\i\t\u\a\9\h\1\0\4\p\q\1\z\i\x\b\e\c\q\w\q\o\t\m\6\6\7\r\0\u\p\d\z\2\f\i\h\l\x\z\5\z\y\k\x\3\2\a\q\h\l\2\8\c\q\z\k\s\1\y\y\9\6\s\8\e\i\h\7\t\i\z\v\j\f\h\j\6\6\n\7\2\z\o\5\e\t\h\6\x\t\l\j\i\i\x\w\b\n\s\2\r\o\u\t\h\e\3\8\5\0\w\q\u\i\l\f\o\v\k\r\9\m\i\x\g\a\z\0\w\y\2\p\l\y\j\l\k\k\q\e\f\8\g\9\t\b\n\o\d\q\e\m\y\q\g\l\a\s\b\m\t\w\i\z\2\g\u\c\c\9\o\8\d\1\y\m\d\h\m\z\8\c\2\e\g\5\o\m\c\o\n\w\1\a\2\w\k\n\u\0\b\a\n\t\t\s\2\x\w\l\m\5\5\9\m\a\s\4\t\l\l\4\x\6\8\h\7\8\w\2\r\t\x\1\d\5\9\r\8\l\5\l\w\o\4\z\1\9\1\i\t\c\v\0\u\3\n\e\p\d\7\0\1\n\b\i\0\l\f\w\7\s\q\o\q\7\3\m\e\2\m\x\f\p\v\3\n\7\j\4\l\e\9\v\c\u\x\k\k\b\r\g\s\u\8\q\q\u\k\x\0\p\x\8\4\z\z\6\e\z\m\4\9\m\p\j\6\8\h\i\r\j\g\h\u\y\6\j\e\6\r\w\7\9\y\1\v\8\j\s\s\c\0\3\6\g\m\f\e\k\w\x\4\w\v\0\b\7\d\9\j\a\8\5\b\3\s\4\r\g\a\7\f\e\8\h\u\k\x\3\f\j\o\7\w\j\d\0\j\q\2\p\m\y\9\8\q\i\g\q\n\w\4\v\j\8\6\l\t\h\t\q\x\i\h\h\c\i\b\d\t\e\d\0\6\9\i\6\p\e\x\x\p\n\q\5\v\n\h\0\v\r\n\b\r\x\t\y\j\6\x\p\0\1\8\u\s\r\e\9\i\m\4\w\o\c\i\v\1\7\e\p\w\5\k\6\g\2\z\5\o\t\n\b\n\0\l\p\c\k\r\i\q\2\c\a\m\v\b\r\m\3\n\w\h\w\k\n\w\o\0\r\z\2\l\f\g\8\1\7\c\6\k\c\r\l\3\x\t\2\i\b\8\1\4\n\6\s\k\z\1\1\e\g\o\r\g\g\l\j\l\s\2\y\6\9\j\r\l\2\2\l\z\m\2\0\k\d\k\3\6\7\h\9\q\2\a\j\9\5\l\w\g\1\9\w\4\i\g\3\f\4\1\8\o\z\y\t\y\1\u\4\j\7\f\r\c\8\2\a\o\y\p\9\5\k\2\4\u\l\8\2\x\3\r\c\p\c\q\2\7\h\2\8\r\v\j\n\1\b\f\4\h\c\y\b\6\g\r\o\w\1\5\5\o\d\k\f\q\5\p\3\x\2\o\i\v\l\5\b\a\r\d ]] 00:07:57.756 00:07:57.756 real 0m4.051s 00:07:57.756 user 0m3.274s 00:07:57.756 sys 0m2.369s 00:07:57.756 08:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.756 08:08:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:57.756 ************************************ 00:07:57.756 END TEST dd_rw_offset 00:07:57.756 ************************************ 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:58.016 08:08:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.016 { 00:07:58.016 "subsystems": [ 00:07:58.016 { 00:07:58.016 "subsystem": "bdev", 00:07:58.016 "config": [ 00:07:58.016 { 00:07:58.016 "params": { 00:07:58.016 "trtype": "pcie", 00:07:58.016 "traddr": "0000:00:10.0", 00:07:58.016 "name": "Nvme0" 00:07:58.016 }, 00:07:58.016 "method": "bdev_nvme_attach_controller" 00:07:58.016 }, 00:07:58.016 { 00:07:58.016 "method": "bdev_wait_for_examine" 00:07:58.016 } 00:07:58.016 ] 00:07:58.016 } 00:07:58.016 ] 00:07:58.016 } 00:07:58.016 [2024-11-17 08:08:02.894442] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:58.016 [2024-11-17 08:08:02.894601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61875 ] 00:07:58.275 [2024-11-17 08:08:03.079700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.275 [2024-11-17 08:08:03.214869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.534 [2024-11-17 08:08:03.453800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.794  [2024-11-17T08:08:05.187Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:00.175 00:08:00.175 08:08:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.175 00:08:00.175 real 0m47.473s 00:08:00.175 user 0m37.879s 00:08:00.175 sys 0m25.470s 00:08:00.175 08:08:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.175 08:08:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.175 ************************************ 00:08:00.175 END TEST spdk_dd_basic_rw 00:08:00.175 ************************************ 00:08:00.175 08:08:04 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:00.175 08:08:04 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.175 08:08:04 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.175 08:08:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:00.175 ************************************ 00:08:00.175 START TEST spdk_dd_posix 00:08:00.175 ************************************ 00:08:00.175 08:08:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:00.175 * Looking for test storage... 
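Both the tail of dd_rw and the basic_rw cleanup above go through clear_nvme, which simply writes a single 1 MiB block of zeroes over the start of the bdev before the dump files are removed with rm -f. A sketch under the same assumptions as the earlier snippets:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json    # assumed, as above

# overwrite one 1 MiB block at the start of the bdev with zeroes
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"

# drop the dump files once the bdev is cleared
rm -f test/dd/dd.dump0 test/dd/dd.dump1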
00:08:00.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.175 --rc genhtml_branch_coverage=1 00:08:00.175 --rc genhtml_function_coverage=1 00:08:00.175 --rc genhtml_legend=1 00:08:00.175 --rc geninfo_all_blocks=1 00:08:00.175 --rc geninfo_unexecuted_blocks=1 00:08:00.175 00:08:00.175 ' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.175 --rc genhtml_branch_coverage=1 00:08:00.175 --rc genhtml_function_coverage=1 00:08:00.175 --rc genhtml_legend=1 00:08:00.175 --rc geninfo_all_blocks=1 00:08:00.175 --rc geninfo_unexecuted_blocks=1 00:08:00.175 00:08:00.175 ' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.175 --rc genhtml_branch_coverage=1 00:08:00.175 --rc genhtml_function_coverage=1 00:08:00.175 --rc genhtml_legend=1 00:08:00.175 --rc geninfo_all_blocks=1 00:08:00.175 --rc geninfo_unexecuted_blocks=1 00:08:00.175 00:08:00.175 ' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.175 --rc genhtml_branch_coverage=1 00:08:00.175 --rc genhtml_function_coverage=1 00:08:00.175 --rc genhtml_legend=1 00:08:00.175 --rc geninfo_all_blocks=1 00:08:00.175 --rc geninfo_unexecuted_blocks=1 00:08:00.175 00:08:00.175 ' 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.175 08:08:05 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:00.436 * First test run, liburing in use 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 ************************************ 00:08:00.436 START TEST dd_flag_append 00:08:00.436 ************************************ 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=m4po6abqyo8hq72v4kft3ifd10jzp6dt 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=9e4p3hjlbrgii1xrrs89u2uvl3cv0z2a 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s m4po6abqyo8hq72v4kft3ifd10jzp6dt 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 9e4p3hjlbrgii1xrrs89u2uvl3cv0z2a 00:08:00.436 08:08:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:00.436 [2024-11-17 08:08:05.316083] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
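dd_flag_append generates the two 32-byte strings shown above (dump0 and dump1), writes each to its own dump file, then copies the first file onto the second with --oflag=append; the pattern match below only succeeds if the target ends up holding the second string with the first appended to it. A hedged reconstruction, with shortened file paths as the only assumption:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=m4po6abqyo8hq72v4kft3ifd10jzp6dt    # 32 random bytes, from the log above
dump1=9e4p3hjlbrgii1xrrs89u2uvl3cv0z2a    # 32 random bytes, from the log above

printf '%s' "$dump0" > dd.dump0           # file names assumed, shortened from the log
printf '%s' "$dump1" > dd.dump1

# --oflag=append makes spdk_dd extend dd.dump1 instead of truncating it
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

# dd.dump1 should now hold the second string immediately followed by the first
[[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]]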
00:08:00.436 [2024-11-17 08:08:05.316233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61969 ] 00:08:00.695 [2024-11-17 08:08:05.501776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.695 [2024-11-17 08:08:05.633979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.955 [2024-11-17 08:08:05.871550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.214  [2024-11-17T08:08:07.165Z] Copying: 32/32 [B] (average 31 kBps) 00:08:02.153 00:08:02.412 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 9e4p3hjlbrgii1xrrs89u2uvl3cv0z2am4po6abqyo8hq72v4kft3ifd10jzp6dt == \9\e\4\p\3\h\j\l\b\r\g\i\i\1\x\r\r\s\8\9\u\2\u\v\l\3\c\v\0\z\2\a\m\4\p\o\6\a\b\q\y\o\8\h\q\7\2\v\4\k\f\t\3\i\f\d\1\0\j\z\p\6\d\t ]] 00:08:02.412 00:08:02.412 real 0m1.992s 00:08:02.412 user 0m1.558s 00:08:02.412 sys 0m1.221s 00:08:02.412 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.412 ************************************ 00:08:02.412 END TEST dd_flag_append 00:08:02.413 ************************************ 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:02.413 ************************************ 00:08:02.413 START TEST dd_flag_directory 00:08:02.413 ************************************ 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.413 08:08:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.413 [2024-11-17 08:08:07.385576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:02.413 [2024-11-17 08:08:07.385713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:08:02.672 [2024-11-17 08:08:07.570098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.932 [2024-11-17 08:08:07.705397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.191 [2024-11-17 08:08:07.942578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.191 [2024-11-17 08:08:08.078942] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.191 [2024-11-17 08:08:08.079005] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.191 [2024-11-17 08:08:08.079032] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.130 [2024-11-17 08:08:08.968306] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.388 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:04.388 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.388 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.389 08:08:09 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.389 08:08:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:04.389 [2024-11-17 08:08:09.354773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:04.389 [2024-11-17 08:08:09.354911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ] 00:08:04.719 [2024-11-17 08:08:09.539085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.979 [2024-11-17 08:08:09.675185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.979 [2024-11-17 08:08:09.900426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.238 [2024-11-17 08:08:10.039504] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.238 [2024-11-17 08:08:10.039563] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.238 [2024-11-17 08:08:10.039589] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.176 [2024-11-17 08:08:10.943162] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.436 00:08:06.436 real 0m3.955s 00:08:06.436 user 0m3.076s 00:08:06.436 sys 0m0.657s 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:06.436 ************************************ 00:08:06.436 END TEST dd_flag_directory 00:08:06.436 ************************************ 00:08:06.436 08:08:11 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.436 ************************************ 00:08:06.436 START TEST dd_flag_nofollow 00:08:06.436 ************************************ 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.436 08:08:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.436 [2024-11-17 08:08:11.425436] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
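dd_flag_nofollow symlinks both dump files (ln -fs above) and, like the directory test before it, runs spdk_dd under NOT: with --iflag=nofollow or --oflag=nofollow the copy is expected to fail, and the "Too many levels of symbolic links" errors below are the pass condition. A sketch of that negative check, substituting plain ! for the suite's NOT helper and shortening the paths:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
ln -fs dd.dump0 dd.dump0.link             # symlink both dump files, as in the log
ln -fs dd.dump1 dd.dump1.link

# with nofollow the symlinked side must be rejected, so the check passes when spdk_dd fails
! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow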
00:08:06.437 [2024-11-17 08:08:11.425595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ] 00:08:06.696 [2024-11-17 08:08:11.612414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.955 [2024-11-17 08:08:11.746596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.214 [2024-11-17 08:08:11.982440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.214 [2024-11-17 08:08:12.120880] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:07.214 [2024-11-17 08:08:12.120943] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:07.214 [2024-11-17 08:08:12.120970] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.154 [2024-11-17 08:08:13.012122] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.412 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:08.412 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.412 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:08.412 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:08.412 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:08.412 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.413 08:08:13 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.413 08:08:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.413 [2024-11-17 08:08:13.386873] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:08.413 [2024-11-17 08:08:13.387016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62112 ] 00:08:08.672 [2024-11-17 08:08:13.569418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.931 [2024-11-17 08:08:13.704273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.191 [2024-11-17 08:08:13.947064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.191 [2024-11-17 08:08:14.084735] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:09.191 [2024-11-17 08:08:14.084821] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:09.191 [2024-11-17 08:08:14.084866] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.129 [2024-11-17 08:08:14.982490] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 08:08:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.388 [2024-11-17 08:08:15.383316] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
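Both directions of the nofollow check fail as expected ("Too many levels of symbolic links" is the ELOOP message raised when O_NOFOLLOW meets a symlink), while the plain copy through dd.dump0.link started here succeeds because symlinks are followed by default. The same behaviour can be reproduced with coreutils dd; this is only an analogy to the spdk_dd runs above, not part of the harness:

    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link iflag=nofollow of=dd.dump1   # fails with ELOOP: nofollow refuses the symlink
    dd if=dd.dump0.link of=dd.dump1                  # succeeds: the link target is opened as usual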
00:08:10.388 [2024-11-17 08:08:15.383476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62137 ] 00:08:10.647 [2024-11-17 08:08:15.567735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.907 [2024-11-17 08:08:15.706694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.167 [2024-11-17 08:08:15.948429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.167  [2024-11-17T08:08:17.557Z] Copying: 512/512 [B] (average 500 kBps) 00:08:12.545 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ dkoceg5myj69tx32scuq4cjgctaww3ks3waiinl9c2l3ob40uw1tpgnanky4xiuo4sk35c39dmczcpruh3dpixt1d4ps7as71frcodfm616r2yruxakml49f9h1zpukrzm9l31hesfl4qfbottf5k3zbm2hev9knd76ae0nn8nf3fro9jsjkizdj9t52rlk2l9n42zxtjfxdmuqryanmcd6p8i7kcrdfe2ocrp902d4rkwjpdsfl4d9m1vlgag6b13ge3orxbq1xj22sxo67kywrrrd4cr58iuxeyj55fc6w0rml0e303d3mv58qp0z9n3o3f6smxxagx1yhjzc2v2vzeeandb001k9loeyx0e25cgrhsqbk2mh84imsq76d7nl8g4xovaifybjz9k24hdtvtdlwroyv5xwa8q9f6jbtoc09mw21csefrj8oz65u2df7xdk3azavq0pew7pp8tzqhp9k4k2q882mnwgg1k8lfuoslof22pwejjc6jp9w == \d\k\o\c\e\g\5\m\y\j\6\9\t\x\3\2\s\c\u\q\4\c\j\g\c\t\a\w\w\3\k\s\3\w\a\i\i\n\l\9\c\2\l\3\o\b\4\0\u\w\1\t\p\g\n\a\n\k\y\4\x\i\u\o\4\s\k\3\5\c\3\9\d\m\c\z\c\p\r\u\h\3\d\p\i\x\t\1\d\4\p\s\7\a\s\7\1\f\r\c\o\d\f\m\6\1\6\r\2\y\r\u\x\a\k\m\l\4\9\f\9\h\1\z\p\u\k\r\z\m\9\l\3\1\h\e\s\f\l\4\q\f\b\o\t\t\f\5\k\3\z\b\m\2\h\e\v\9\k\n\d\7\6\a\e\0\n\n\8\n\f\3\f\r\o\9\j\s\j\k\i\z\d\j\9\t\5\2\r\l\k\2\l\9\n\4\2\z\x\t\j\f\x\d\m\u\q\r\y\a\n\m\c\d\6\p\8\i\7\k\c\r\d\f\e\2\o\c\r\p\9\0\2\d\4\r\k\w\j\p\d\s\f\l\4\d\9\m\1\v\l\g\a\g\6\b\1\3\g\e\3\o\r\x\b\q\1\x\j\2\2\s\x\o\6\7\k\y\w\r\r\r\d\4\c\r\5\8\i\u\x\e\y\j\5\5\f\c\6\w\0\r\m\l\0\e\3\0\3\d\3\m\v\5\8\q\p\0\z\9\n\3\o\3\f\6\s\m\x\x\a\g\x\1\y\h\j\z\c\2\v\2\v\z\e\e\a\n\d\b\0\0\1\k\9\l\o\e\y\x\0\e\2\5\c\g\r\h\s\q\b\k\2\m\h\8\4\i\m\s\q\7\6\d\7\n\l\8\g\4\x\o\v\a\i\f\y\b\j\z\9\k\2\4\h\d\t\v\t\d\l\w\r\o\y\v\5\x\w\a\8\q\9\f\6\j\b\t\o\c\0\9\m\w\2\1\c\s\e\f\r\j\8\o\z\6\5\u\2\d\f\7\x\d\k\3\a\z\a\v\q\0\p\e\w\7\p\p\8\t\z\q\h\p\9\k\4\k\2\q\8\8\2\m\n\w\g\g\1\k\8\l\f\u\o\s\l\o\f\2\2\p\w\e\j\j\c\6\j\p\9\w ]] 00:08:12.545 00:08:12.545 real 0m5.973s 00:08:12.545 user 0m4.668s 00:08:12.545 sys 0m1.883s 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:12.545 ************************************ 00:08:12.545 END TEST dd_flag_nofollow 00:08:12.545 ************************************ 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:12.545 ************************************ 00:08:12.545 START TEST dd_flag_noatime 00:08:12.545 ************************************ 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731830896 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731830897 00:08:12.545 08:08:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:13.483 08:08:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.742 [2024-11-17 08:08:18.494068] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:13.742 [2024-11-17 08:08:18.494208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62197 ] 00:08:13.742 [2024-11-17 08:08:18.678802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.001 [2024-11-17 08:08:18.807944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.260 [2024-11-17 08:08:19.045007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.260  [2024-11-17T08:08:20.648Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.636 00:08:15.636 08:08:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.636 08:08:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731830896 )) 00:08:15.636 08:08:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.636 08:08:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731830897 )) 00:08:15.636 08:08:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.636 [2024-11-17 08:08:20.484645] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
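The access times captured above (atime_if=1731830896 for dd.dump0, atime_of=1731830897 for dd.dump1) are compared again after the --iflag=noatime copy, and a second copy without the flag is then expected to move the source's atime forward, which is what the (( atime_if < ... )) check further down verifies. A minimal stand-alone probe of the same idea, with plain dd standing in for spdk_dd and assuming a filesystem that actually updates atime on read (relatime can mask it), might look like:

    before=$(stat --printf=%X dd.dump0)            # atime before the copy
    dd if=dd.dump0 iflag=noatime of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == before ))   # noatime: source atime must not change
    sleep 1
    dd if=dd.dump0 of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > before ))    # default read is allowed to advance atime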
00:08:15.636 [2024-11-17 08:08:20.484972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62228 ] 00:08:15.895 [2024-11-17 08:08:20.667715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.895 [2024-11-17 08:08:20.797435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.154 [2024-11-17 08:08:21.024310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.413  [2024-11-17T08:08:22.362Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.350 00:08:17.350 08:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.350 08:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731830901 )) 00:08:17.350 00:08:17.350 real 0m5.015s 00:08:17.350 user 0m3.114s 00:08:17.350 sys 0m2.482s 00:08:17.350 08:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.609 ************************************ 00:08:17.609 END TEST dd_flag_noatime 00:08:17.609 ************************************ 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:17.609 ************************************ 00:08:17.609 START TEST dd_flags_misc 00:08:17.609 ************************************ 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.609 08:08:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:17.609 [2024-11-17 08:08:22.553056] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:17.609 [2024-11-17 08:08:22.553183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62274 ] 00:08:17.872 [2024-11-17 08:08:22.736268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.872 [2024-11-17 08:08:22.866431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.134 [2024-11-17 08:08:23.097312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.393  [2024-11-17T08:08:24.785Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.773 00:08:19.773 08:08:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jqghfizbvu73kuf049pztusdch14fyiseg9fazecs8k8gipwckv0o75j5pyit3gmwuzrwru0veha48cacg9oql5ylws2oobhby89efpesss8oq15kbf6azarkfnvles5ul12ggstj0j8ce0hra2t9p6lab3ve9w59vaupdp8d48tr52u63h9jybb3qaj5zkf0a3bwvqrg5hx4lojf0knzhq993n8qjh0n4ro7u7r6l5bajc8772h3b65zdgk8legg7qllwp8yljk6vsstz34jyjnn6r3tiwe9s76et8jwpgog691eyl22dtneyncbkn0xdkziosssxzlujbh877yay67b5aquu2awgsm3gbu23n3oik00pqga1sgyyhupwfuiyqlc10phxhhoztht31evqhxz1dy4druaj9robn0doygbc7l2ync827k68ytt2jfbjev27p42i89uioinufqqys2yjjud5lfrub6s9osz5h4xnu1f21jinmugxfd0pc0 == \j\q\g\h\f\i\z\b\v\u\7\3\k\u\f\0\4\9\p\z\t\u\s\d\c\h\1\4\f\y\i\s\e\g\9\f\a\z\e\c\s\8\k\8\g\i\p\w\c\k\v\0\o\7\5\j\5\p\y\i\t\3\g\m\w\u\z\r\w\r\u\0\v\e\h\a\4\8\c\a\c\g\9\o\q\l\5\y\l\w\s\2\o\o\b\h\b\y\8\9\e\f\p\e\s\s\s\8\o\q\1\5\k\b\f\6\a\z\a\r\k\f\n\v\l\e\s\5\u\l\1\2\g\g\s\t\j\0\j\8\c\e\0\h\r\a\2\t\9\p\6\l\a\b\3\v\e\9\w\5\9\v\a\u\p\d\p\8\d\4\8\t\r\5\2\u\6\3\h\9\j\y\b\b\3\q\a\j\5\z\k\f\0\a\3\b\w\v\q\r\g\5\h\x\4\l\o\j\f\0\k\n\z\h\q\9\9\3\n\8\q\j\h\0\n\4\r\o\7\u\7\r\6\l\5\b\a\j\c\8\7\7\2\h\3\b\6\5\z\d\g\k\8\l\e\g\g\7\q\l\l\w\p\8\y\l\j\k\6\v\s\s\t\z\3\4\j\y\j\n\n\6\r\3\t\i\w\e\9\s\7\6\e\t\8\j\w\p\g\o\g\6\9\1\e\y\l\2\2\d\t\n\e\y\n\c\b\k\n\0\x\d\k\z\i\o\s\s\s\x\z\l\u\j\b\h\8\7\7\y\a\y\6\7\b\5\a\q\u\u\2\a\w\g\s\m\3\g\b\u\2\3\n\3\o\i\k\0\0\p\q\g\a\1\s\g\y\y\h\u\p\w\f\u\i\y\q\l\c\1\0\p\h\x\h\h\o\z\t\h\t\3\1\e\v\q\h\x\z\1\d\y\4\d\r\u\a\j\9\r\o\b\n\0\d\o\y\g\b\c\7\l\2\y\n\c\8\2\7\k\6\8\y\t\t\2\j\f\b\j\e\v\2\7\p\4\2\i\8\9\u\i\o\i\n\u\f\q\q\y\s\2\y\j\j\u\d\5\l\f\r\u\b\6\s\9\o\s\z\5\h\4\x\n\u\1\f\2\1\j\i\n\m\u\g\x\f\d\0\p\c\0 ]] 00:08:19.773 08:08:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.773 08:08:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:19.773 [2024-11-17 08:08:24.516180] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:19.773 [2024-11-17 08:08:24.516472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62295 ] 00:08:19.773 [2024-11-17 08:08:24.699774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.032 [2024-11-17 08:08:24.831757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.291 [2024-11-17 08:08:25.070416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.291  [2024-11-17T08:08:26.681Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.669 00:08:21.669 08:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jqghfizbvu73kuf049pztusdch14fyiseg9fazecs8k8gipwckv0o75j5pyit3gmwuzrwru0veha48cacg9oql5ylws2oobhby89efpesss8oq15kbf6azarkfnvles5ul12ggstj0j8ce0hra2t9p6lab3ve9w59vaupdp8d48tr52u63h9jybb3qaj5zkf0a3bwvqrg5hx4lojf0knzhq993n8qjh0n4ro7u7r6l5bajc8772h3b65zdgk8legg7qllwp8yljk6vsstz34jyjnn6r3tiwe9s76et8jwpgog691eyl22dtneyncbkn0xdkziosssxzlujbh877yay67b5aquu2awgsm3gbu23n3oik00pqga1sgyyhupwfuiyqlc10phxhhoztht31evqhxz1dy4druaj9robn0doygbc7l2ync827k68ytt2jfbjev27p42i89uioinufqqys2yjjud5lfrub6s9osz5h4xnu1f21jinmugxfd0pc0 == \j\q\g\h\f\i\z\b\v\u\7\3\k\u\f\0\4\9\p\z\t\u\s\d\c\h\1\4\f\y\i\s\e\g\9\f\a\z\e\c\s\8\k\8\g\i\p\w\c\k\v\0\o\7\5\j\5\p\y\i\t\3\g\m\w\u\z\r\w\r\u\0\v\e\h\a\4\8\c\a\c\g\9\o\q\l\5\y\l\w\s\2\o\o\b\h\b\y\8\9\e\f\p\e\s\s\s\8\o\q\1\5\k\b\f\6\a\z\a\r\k\f\n\v\l\e\s\5\u\l\1\2\g\g\s\t\j\0\j\8\c\e\0\h\r\a\2\t\9\p\6\l\a\b\3\v\e\9\w\5\9\v\a\u\p\d\p\8\d\4\8\t\r\5\2\u\6\3\h\9\j\y\b\b\3\q\a\j\5\z\k\f\0\a\3\b\w\v\q\r\g\5\h\x\4\l\o\j\f\0\k\n\z\h\q\9\9\3\n\8\q\j\h\0\n\4\r\o\7\u\7\r\6\l\5\b\a\j\c\8\7\7\2\h\3\b\6\5\z\d\g\k\8\l\e\g\g\7\q\l\l\w\p\8\y\l\j\k\6\v\s\s\t\z\3\4\j\y\j\n\n\6\r\3\t\i\w\e\9\s\7\6\e\t\8\j\w\p\g\o\g\6\9\1\e\y\l\2\2\d\t\n\e\y\n\c\b\k\n\0\x\d\k\z\i\o\s\s\s\x\z\l\u\j\b\h\8\7\7\y\a\y\6\7\b\5\a\q\u\u\2\a\w\g\s\m\3\g\b\u\2\3\n\3\o\i\k\0\0\p\q\g\a\1\s\g\y\y\h\u\p\w\f\u\i\y\q\l\c\1\0\p\h\x\h\h\o\z\t\h\t\3\1\e\v\q\h\x\z\1\d\y\4\d\r\u\a\j\9\r\o\b\n\0\d\o\y\g\b\c\7\l\2\y\n\c\8\2\7\k\6\8\y\t\t\2\j\f\b\j\e\v\2\7\p\4\2\i\8\9\u\i\o\i\n\u\f\q\q\y\s\2\y\j\j\u\d\5\l\f\r\u\b\6\s\9\o\s\z\5\h\4\x\n\u\1\f\2\1\j\i\n\m\u\g\x\f\d\0\p\c\0 ]] 00:08:21.669 08:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.669 08:08:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:21.669 [2024-11-17 08:08:26.490567] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:21.669 [2024-11-17 08:08:26.490696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62317 ] 00:08:21.669 [2024-11-17 08:08:26.677473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.929 [2024-11-17 08:08:26.811055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.188 [2024-11-17 08:08:27.037820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.188  [2024-11-17T08:08:28.578Z] Copying: 512/512 [B] (average 125 kBps) 00:08:23.566 00:08:23.567 08:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jqghfizbvu73kuf049pztusdch14fyiseg9fazecs8k8gipwckv0o75j5pyit3gmwuzrwru0veha48cacg9oql5ylws2oobhby89efpesss8oq15kbf6azarkfnvles5ul12ggstj0j8ce0hra2t9p6lab3ve9w59vaupdp8d48tr52u63h9jybb3qaj5zkf0a3bwvqrg5hx4lojf0knzhq993n8qjh0n4ro7u7r6l5bajc8772h3b65zdgk8legg7qllwp8yljk6vsstz34jyjnn6r3tiwe9s76et8jwpgog691eyl22dtneyncbkn0xdkziosssxzlujbh877yay67b5aquu2awgsm3gbu23n3oik00pqga1sgyyhupwfuiyqlc10phxhhoztht31evqhxz1dy4druaj9robn0doygbc7l2ync827k68ytt2jfbjev27p42i89uioinufqqys2yjjud5lfrub6s9osz5h4xnu1f21jinmugxfd0pc0 == \j\q\g\h\f\i\z\b\v\u\7\3\k\u\f\0\4\9\p\z\t\u\s\d\c\h\1\4\f\y\i\s\e\g\9\f\a\z\e\c\s\8\k\8\g\i\p\w\c\k\v\0\o\7\5\j\5\p\y\i\t\3\g\m\w\u\z\r\w\r\u\0\v\e\h\a\4\8\c\a\c\g\9\o\q\l\5\y\l\w\s\2\o\o\b\h\b\y\8\9\e\f\p\e\s\s\s\8\o\q\1\5\k\b\f\6\a\z\a\r\k\f\n\v\l\e\s\5\u\l\1\2\g\g\s\t\j\0\j\8\c\e\0\h\r\a\2\t\9\p\6\l\a\b\3\v\e\9\w\5\9\v\a\u\p\d\p\8\d\4\8\t\r\5\2\u\6\3\h\9\j\y\b\b\3\q\a\j\5\z\k\f\0\a\3\b\w\v\q\r\g\5\h\x\4\l\o\j\f\0\k\n\z\h\q\9\9\3\n\8\q\j\h\0\n\4\r\o\7\u\7\r\6\l\5\b\a\j\c\8\7\7\2\h\3\b\6\5\z\d\g\k\8\l\e\g\g\7\q\l\l\w\p\8\y\l\j\k\6\v\s\s\t\z\3\4\j\y\j\n\n\6\r\3\t\i\w\e\9\s\7\6\e\t\8\j\w\p\g\o\g\6\9\1\e\y\l\2\2\d\t\n\e\y\n\c\b\k\n\0\x\d\k\z\i\o\s\s\s\x\z\l\u\j\b\h\8\7\7\y\a\y\6\7\b\5\a\q\u\u\2\a\w\g\s\m\3\g\b\u\2\3\n\3\o\i\k\0\0\p\q\g\a\1\s\g\y\y\h\u\p\w\f\u\i\y\q\l\c\1\0\p\h\x\h\h\o\z\t\h\t\3\1\e\v\q\h\x\z\1\d\y\4\d\r\u\a\j\9\r\o\b\n\0\d\o\y\g\b\c\7\l\2\y\n\c\8\2\7\k\6\8\y\t\t\2\j\f\b\j\e\v\2\7\p\4\2\i\8\9\u\i\o\i\n\u\f\q\q\y\s\2\y\j\j\u\d\5\l\f\r\u\b\6\s\9\o\s\z\5\h\4\x\n\u\1\f\2\1\j\i\n\m\u\g\x\f\d\0\p\c\0 ]] 00:08:23.567 08:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.567 08:08:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:23.567 [2024-11-17 08:08:28.451721] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:23.567 [2024-11-17 08:08:28.451851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62344 ] 00:08:23.826 [2024-11-17 08:08:28.634283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.826 [2024-11-17 08:08:28.766904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.141 [2024-11-17 08:08:28.998596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.141  [2024-11-17T08:08:30.531Z] Copying: 512/512 [B] (average 250 kBps) 00:08:25.519 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jqghfizbvu73kuf049pztusdch14fyiseg9fazecs8k8gipwckv0o75j5pyit3gmwuzrwru0veha48cacg9oql5ylws2oobhby89efpesss8oq15kbf6azarkfnvles5ul12ggstj0j8ce0hra2t9p6lab3ve9w59vaupdp8d48tr52u63h9jybb3qaj5zkf0a3bwvqrg5hx4lojf0knzhq993n8qjh0n4ro7u7r6l5bajc8772h3b65zdgk8legg7qllwp8yljk6vsstz34jyjnn6r3tiwe9s76et8jwpgog691eyl22dtneyncbkn0xdkziosssxzlujbh877yay67b5aquu2awgsm3gbu23n3oik00pqga1sgyyhupwfuiyqlc10phxhhoztht31evqhxz1dy4druaj9robn0doygbc7l2ync827k68ytt2jfbjev27p42i89uioinufqqys2yjjud5lfrub6s9osz5h4xnu1f21jinmugxfd0pc0 == \j\q\g\h\f\i\z\b\v\u\7\3\k\u\f\0\4\9\p\z\t\u\s\d\c\h\1\4\f\y\i\s\e\g\9\f\a\z\e\c\s\8\k\8\g\i\p\w\c\k\v\0\o\7\5\j\5\p\y\i\t\3\g\m\w\u\z\r\w\r\u\0\v\e\h\a\4\8\c\a\c\g\9\o\q\l\5\y\l\w\s\2\o\o\b\h\b\y\8\9\e\f\p\e\s\s\s\8\o\q\1\5\k\b\f\6\a\z\a\r\k\f\n\v\l\e\s\5\u\l\1\2\g\g\s\t\j\0\j\8\c\e\0\h\r\a\2\t\9\p\6\l\a\b\3\v\e\9\w\5\9\v\a\u\p\d\p\8\d\4\8\t\r\5\2\u\6\3\h\9\j\y\b\b\3\q\a\j\5\z\k\f\0\a\3\b\w\v\q\r\g\5\h\x\4\l\o\j\f\0\k\n\z\h\q\9\9\3\n\8\q\j\h\0\n\4\r\o\7\u\7\r\6\l\5\b\a\j\c\8\7\7\2\h\3\b\6\5\z\d\g\k\8\l\e\g\g\7\q\l\l\w\p\8\y\l\j\k\6\v\s\s\t\z\3\4\j\y\j\n\n\6\r\3\t\i\w\e\9\s\7\6\e\t\8\j\w\p\g\o\g\6\9\1\e\y\l\2\2\d\t\n\e\y\n\c\b\k\n\0\x\d\k\z\i\o\s\s\s\x\z\l\u\j\b\h\8\7\7\y\a\y\6\7\b\5\a\q\u\u\2\a\w\g\s\m\3\g\b\u\2\3\n\3\o\i\k\0\0\p\q\g\a\1\s\g\y\y\h\u\p\w\f\u\i\y\q\l\c\1\0\p\h\x\h\h\o\z\t\h\t\3\1\e\v\q\h\x\z\1\d\y\4\d\r\u\a\j\9\r\o\b\n\0\d\o\y\g\b\c\7\l\2\y\n\c\8\2\7\k\6\8\y\t\t\2\j\f\b\j\e\v\2\7\p\4\2\i\8\9\u\i\o\i\n\u\f\q\q\y\s\2\y\j\j\u\d\5\l\f\r\u\b\6\s\9\o\s\z\5\h\4\x\n\u\1\f\2\1\j\i\n\m\u\g\x\f\d\0\p\c\0 ]] 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.519 08:08:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:25.519 [2024-11-17 08:08:30.435693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:25.519 [2024-11-17 08:08:30.436020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62371 ] 00:08:25.778 [2024-11-17 08:08:30.619918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.778 [2024-11-17 08:08:30.753836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.038 [2024-11-17 08:08:30.991402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.297  [2024-11-17T08:08:32.688Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.676 00:08:27.676 08:08:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnmkkbt1gg9ngnuoy88lkuarpaipdgduc99simy14asii3hqucmw2v8uwryfh6k2nrfjf0w6gzfzq2g4o4tmfp3pemb60m1ivd8fgt0pg5c2jr8dsgq9mjtl7cqv3lb5g5b80xhybahol1uqn6eo6pqtzpf0lxudya4bkbmob2uq0ezuejgkx3iyjhqpmmcxgcijj45d9bisaw9iwwk6lgdaps49dh86x4q8674qg9gfxf8huu2th7xtm922av4za2b8qh93cidt29ms3fd6zunlxvbnd410ua4ixkzswwchl58kng3s4fp55r3avecef0jxdi6adchep7k8fb3ap1zmqyaqhk9puovkfqya8imepsbekjjch9wjdoo6ifdeurd6mot8tg76xwjsu50mz1o9162vic6c5px00mdguuq7w0ckvvesq8l7wluifw3irtup552sv5nagckk3rpdklsylitf5vkqrhxb35ge37vul39fvw38lmkl1pgen9zu == \x\n\m\k\k\b\t\1\g\g\9\n\g\n\u\o\y\8\8\l\k\u\a\r\p\a\i\p\d\g\d\u\c\9\9\s\i\m\y\1\4\a\s\i\i\3\h\q\u\c\m\w\2\v\8\u\w\r\y\f\h\6\k\2\n\r\f\j\f\0\w\6\g\z\f\z\q\2\g\4\o\4\t\m\f\p\3\p\e\m\b\6\0\m\1\i\v\d\8\f\g\t\0\p\g\5\c\2\j\r\8\d\s\g\q\9\m\j\t\l\7\c\q\v\3\l\b\5\g\5\b\8\0\x\h\y\b\a\h\o\l\1\u\q\n\6\e\o\6\p\q\t\z\p\f\0\l\x\u\d\y\a\4\b\k\b\m\o\b\2\u\q\0\e\z\u\e\j\g\k\x\3\i\y\j\h\q\p\m\m\c\x\g\c\i\j\j\4\5\d\9\b\i\s\a\w\9\i\w\w\k\6\l\g\d\a\p\s\4\9\d\h\8\6\x\4\q\8\6\7\4\q\g\9\g\f\x\f\8\h\u\u\2\t\h\7\x\t\m\9\2\2\a\v\4\z\a\2\b\8\q\h\9\3\c\i\d\t\2\9\m\s\3\f\d\6\z\u\n\l\x\v\b\n\d\4\1\0\u\a\4\i\x\k\z\s\w\w\c\h\l\5\8\k\n\g\3\s\4\f\p\5\5\r\3\a\v\e\c\e\f\0\j\x\d\i\6\a\d\c\h\e\p\7\k\8\f\b\3\a\p\1\z\m\q\y\a\q\h\k\9\p\u\o\v\k\f\q\y\a\8\i\m\e\p\s\b\e\k\j\j\c\h\9\w\j\d\o\o\6\i\f\d\e\u\r\d\6\m\o\t\8\t\g\7\6\x\w\j\s\u\5\0\m\z\1\o\9\1\6\2\v\i\c\6\c\5\p\x\0\0\m\d\g\u\u\q\7\w\0\c\k\v\v\e\s\q\8\l\7\w\l\u\i\f\w\3\i\r\t\u\p\5\5\2\s\v\5\n\a\g\c\k\k\3\r\p\d\k\l\s\y\l\i\t\f\5\v\k\q\r\h\x\b\3\5\g\e\3\7\v\u\l\3\9\f\v\w\3\8\l\m\k\l\1\p\g\e\n\9\z\u ]] 00:08:27.676 08:08:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.676 08:08:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:27.676 [2024-11-17 08:08:32.412649] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:27.676 [2024-11-17 08:08:32.412795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62398 ] 00:08:27.676 [2024-11-17 08:08:32.597098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.935 [2024-11-17 08:08:32.738069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.194 [2024-11-17 08:08:32.984219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.194  [2024-11-17T08:08:34.601Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.589 00:08:29.589 08:08:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnmkkbt1gg9ngnuoy88lkuarpaipdgduc99simy14asii3hqucmw2v8uwryfh6k2nrfjf0w6gzfzq2g4o4tmfp3pemb60m1ivd8fgt0pg5c2jr8dsgq9mjtl7cqv3lb5g5b80xhybahol1uqn6eo6pqtzpf0lxudya4bkbmob2uq0ezuejgkx3iyjhqpmmcxgcijj45d9bisaw9iwwk6lgdaps49dh86x4q8674qg9gfxf8huu2th7xtm922av4za2b8qh93cidt29ms3fd6zunlxvbnd410ua4ixkzswwchl58kng3s4fp55r3avecef0jxdi6adchep7k8fb3ap1zmqyaqhk9puovkfqya8imepsbekjjch9wjdoo6ifdeurd6mot8tg76xwjsu50mz1o9162vic6c5px00mdguuq7w0ckvvesq8l7wluifw3irtup552sv5nagckk3rpdklsylitf5vkqrhxb35ge37vul39fvw38lmkl1pgen9zu == \x\n\m\k\k\b\t\1\g\g\9\n\g\n\u\o\y\8\8\l\k\u\a\r\p\a\i\p\d\g\d\u\c\9\9\s\i\m\y\1\4\a\s\i\i\3\h\q\u\c\m\w\2\v\8\u\w\r\y\f\h\6\k\2\n\r\f\j\f\0\w\6\g\z\f\z\q\2\g\4\o\4\t\m\f\p\3\p\e\m\b\6\0\m\1\i\v\d\8\f\g\t\0\p\g\5\c\2\j\r\8\d\s\g\q\9\m\j\t\l\7\c\q\v\3\l\b\5\g\5\b\8\0\x\h\y\b\a\h\o\l\1\u\q\n\6\e\o\6\p\q\t\z\p\f\0\l\x\u\d\y\a\4\b\k\b\m\o\b\2\u\q\0\e\z\u\e\j\g\k\x\3\i\y\j\h\q\p\m\m\c\x\g\c\i\j\j\4\5\d\9\b\i\s\a\w\9\i\w\w\k\6\l\g\d\a\p\s\4\9\d\h\8\6\x\4\q\8\6\7\4\q\g\9\g\f\x\f\8\h\u\u\2\t\h\7\x\t\m\9\2\2\a\v\4\z\a\2\b\8\q\h\9\3\c\i\d\t\2\9\m\s\3\f\d\6\z\u\n\l\x\v\b\n\d\4\1\0\u\a\4\i\x\k\z\s\w\w\c\h\l\5\8\k\n\g\3\s\4\f\p\5\5\r\3\a\v\e\c\e\f\0\j\x\d\i\6\a\d\c\h\e\p\7\k\8\f\b\3\a\p\1\z\m\q\y\a\q\h\k\9\p\u\o\v\k\f\q\y\a\8\i\m\e\p\s\b\e\k\j\j\c\h\9\w\j\d\o\o\6\i\f\d\e\u\r\d\6\m\o\t\8\t\g\7\6\x\w\j\s\u\5\0\m\z\1\o\9\1\6\2\v\i\c\6\c\5\p\x\0\0\m\d\g\u\u\q\7\w\0\c\k\v\v\e\s\q\8\l\7\w\l\u\i\f\w\3\i\r\t\u\p\5\5\2\s\v\5\n\a\g\c\k\k\3\r\p\d\k\l\s\y\l\i\t\f\5\v\k\q\r\h\x\b\3\5\g\e\3\7\v\u\l\3\9\f\v\w\3\8\l\m\k\l\1\p\g\e\n\9\z\u ]] 00:08:29.589 08:08:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.589 08:08:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:29.589 [2024-11-17 08:08:34.434381] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:29.589 [2024-11-17 08:08:34.434545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62425 ] 00:08:29.849 [2024-11-17 08:08:34.616652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.849 [2024-11-17 08:08:34.755035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.108 [2024-11-17 08:08:34.994738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.366  [2024-11-17T08:08:36.317Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.305 00:08:31.564 08:08:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnmkkbt1gg9ngnuoy88lkuarpaipdgduc99simy14asii3hqucmw2v8uwryfh6k2nrfjf0w6gzfzq2g4o4tmfp3pemb60m1ivd8fgt0pg5c2jr8dsgq9mjtl7cqv3lb5g5b80xhybahol1uqn6eo6pqtzpf0lxudya4bkbmob2uq0ezuejgkx3iyjhqpmmcxgcijj45d9bisaw9iwwk6lgdaps49dh86x4q8674qg9gfxf8huu2th7xtm922av4za2b8qh93cidt29ms3fd6zunlxvbnd410ua4ixkzswwchl58kng3s4fp55r3avecef0jxdi6adchep7k8fb3ap1zmqyaqhk9puovkfqya8imepsbekjjch9wjdoo6ifdeurd6mot8tg76xwjsu50mz1o9162vic6c5px00mdguuq7w0ckvvesq8l7wluifw3irtup552sv5nagckk3rpdklsylitf5vkqrhxb35ge37vul39fvw38lmkl1pgen9zu == \x\n\m\k\k\b\t\1\g\g\9\n\g\n\u\o\y\8\8\l\k\u\a\r\p\a\i\p\d\g\d\u\c\9\9\s\i\m\y\1\4\a\s\i\i\3\h\q\u\c\m\w\2\v\8\u\w\r\y\f\h\6\k\2\n\r\f\j\f\0\w\6\g\z\f\z\q\2\g\4\o\4\t\m\f\p\3\p\e\m\b\6\0\m\1\i\v\d\8\f\g\t\0\p\g\5\c\2\j\r\8\d\s\g\q\9\m\j\t\l\7\c\q\v\3\l\b\5\g\5\b\8\0\x\h\y\b\a\h\o\l\1\u\q\n\6\e\o\6\p\q\t\z\p\f\0\l\x\u\d\y\a\4\b\k\b\m\o\b\2\u\q\0\e\z\u\e\j\g\k\x\3\i\y\j\h\q\p\m\m\c\x\g\c\i\j\j\4\5\d\9\b\i\s\a\w\9\i\w\w\k\6\l\g\d\a\p\s\4\9\d\h\8\6\x\4\q\8\6\7\4\q\g\9\g\f\x\f\8\h\u\u\2\t\h\7\x\t\m\9\2\2\a\v\4\z\a\2\b\8\q\h\9\3\c\i\d\t\2\9\m\s\3\f\d\6\z\u\n\l\x\v\b\n\d\4\1\0\u\a\4\i\x\k\z\s\w\w\c\h\l\5\8\k\n\g\3\s\4\f\p\5\5\r\3\a\v\e\c\e\f\0\j\x\d\i\6\a\d\c\h\e\p\7\k\8\f\b\3\a\p\1\z\m\q\y\a\q\h\k\9\p\u\o\v\k\f\q\y\a\8\i\m\e\p\s\b\e\k\j\j\c\h\9\w\j\d\o\o\6\i\f\d\e\u\r\d\6\m\o\t\8\t\g\7\6\x\w\j\s\u\5\0\m\z\1\o\9\1\6\2\v\i\c\6\c\5\p\x\0\0\m\d\g\u\u\q\7\w\0\c\k\v\v\e\s\q\8\l\7\w\l\u\i\f\w\3\i\r\t\u\p\5\5\2\s\v\5\n\a\g\c\k\k\3\r\p\d\k\l\s\y\l\i\t\f\5\v\k\q\r\h\x\b\3\5\g\e\3\7\v\u\l\3\9\f\v\w\3\8\l\m\k\l\1\p\g\e\n\9\z\u ]] 00:08:31.564 08:08:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.564 08:08:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:31.564 [2024-11-17 08:08:36.446002] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
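The eight spdk_dd invocations in this block pair each read-side flag (direct, nonblock) with each write-side flag (direct, nonblock, sync, dsync), and every long [[ ... == ... ]] line is the 512-byte payload generated for dd.dump0 being compared against what arrived in dd.dump1. Reconstructed as a hedged sketch rather than quoted from posix.sh, the loop is roughly:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        tr -dc 'a-z0-9' < /dev/urandom | head -c 512 > dd.dump0   # stand-in for gen_bytes 512
        for flag_rw in "${flags_rw[@]}"; do
            ./build/bin/spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            [[ $(<dd.dump1) == $(<dd.dump0) ]]                     # copy must match the source byte for byte
        done
    done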
00:08:31.564 [2024-11-17 08:08:36.446146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62452 ] 00:08:31.823 [2024-11-17 08:08:36.634068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.823 [2024-11-17 08:08:36.769438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.082 [2024-11-17 08:08:37.011021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.341  [2024-11-17T08:08:38.733Z] Copying: 512/512 [B] (average 166 kBps) 00:08:33.721 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xnmkkbt1gg9ngnuoy88lkuarpaipdgduc99simy14asii3hqucmw2v8uwryfh6k2nrfjf0w6gzfzq2g4o4tmfp3pemb60m1ivd8fgt0pg5c2jr8dsgq9mjtl7cqv3lb5g5b80xhybahol1uqn6eo6pqtzpf0lxudya4bkbmob2uq0ezuejgkx3iyjhqpmmcxgcijj45d9bisaw9iwwk6lgdaps49dh86x4q8674qg9gfxf8huu2th7xtm922av4za2b8qh93cidt29ms3fd6zunlxvbnd410ua4ixkzswwchl58kng3s4fp55r3avecef0jxdi6adchep7k8fb3ap1zmqyaqhk9puovkfqya8imepsbekjjch9wjdoo6ifdeurd6mot8tg76xwjsu50mz1o9162vic6c5px00mdguuq7w0ckvvesq8l7wluifw3irtup552sv5nagckk3rpdklsylitf5vkqrhxb35ge37vul39fvw38lmkl1pgen9zu == \x\n\m\k\k\b\t\1\g\g\9\n\g\n\u\o\y\8\8\l\k\u\a\r\p\a\i\p\d\g\d\u\c\9\9\s\i\m\y\1\4\a\s\i\i\3\h\q\u\c\m\w\2\v\8\u\w\r\y\f\h\6\k\2\n\r\f\j\f\0\w\6\g\z\f\z\q\2\g\4\o\4\t\m\f\p\3\p\e\m\b\6\0\m\1\i\v\d\8\f\g\t\0\p\g\5\c\2\j\r\8\d\s\g\q\9\m\j\t\l\7\c\q\v\3\l\b\5\g\5\b\8\0\x\h\y\b\a\h\o\l\1\u\q\n\6\e\o\6\p\q\t\z\p\f\0\l\x\u\d\y\a\4\b\k\b\m\o\b\2\u\q\0\e\z\u\e\j\g\k\x\3\i\y\j\h\q\p\m\m\c\x\g\c\i\j\j\4\5\d\9\b\i\s\a\w\9\i\w\w\k\6\l\g\d\a\p\s\4\9\d\h\8\6\x\4\q\8\6\7\4\q\g\9\g\f\x\f\8\h\u\u\2\t\h\7\x\t\m\9\2\2\a\v\4\z\a\2\b\8\q\h\9\3\c\i\d\t\2\9\m\s\3\f\d\6\z\u\n\l\x\v\b\n\d\4\1\0\u\a\4\i\x\k\z\s\w\w\c\h\l\5\8\k\n\g\3\s\4\f\p\5\5\r\3\a\v\e\c\e\f\0\j\x\d\i\6\a\d\c\h\e\p\7\k\8\f\b\3\a\p\1\z\m\q\y\a\q\h\k\9\p\u\o\v\k\f\q\y\a\8\i\m\e\p\s\b\e\k\j\j\c\h\9\w\j\d\o\o\6\i\f\d\e\u\r\d\6\m\o\t\8\t\g\7\6\x\w\j\s\u\5\0\m\z\1\o\9\1\6\2\v\i\c\6\c\5\p\x\0\0\m\d\g\u\u\q\7\w\0\c\k\v\v\e\s\q\8\l\7\w\l\u\i\f\w\3\i\r\t\u\p\5\5\2\s\v\5\n\a\g\c\k\k\3\r\p\d\k\l\s\y\l\i\t\f\5\v\k\q\r\h\x\b\3\5\g\e\3\7\v\u\l\3\9\f\v\w\3\8\l\m\k\l\1\p\g\e\n\9\z\u ]] 00:08:33.721 00:08:33.721 real 0m15.936s 00:08:33.721 user 0m12.572s 00:08:33.721 sys 0m9.780s 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.721 ************************************ 00:08:33.721 END TEST dd_flags_misc 00:08:33.721 ************************************ 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:33.721 * Second test run, disabling liburing, forcing AIO 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.721 ************************************ 00:08:33.721 START TEST dd_flag_append_forced_aio 00:08:33.721 ************************************ 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=3nyo6hyl898hoq6yat9i7eggdsm8x7tq 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=knfk36hfqwc5v41nn7lpborkdtjgyb48 00:08:33.722 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 3nyo6hyl898hoq6yat9i7eggdsm8x7tq 00:08:33.722 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s knfk36hfqwc5v41nn7lpborkdtjgyb48 00:08:33.722 08:08:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:33.722 [2024-11-17 08:08:38.563081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
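dd.dump0 and dd.dump1 are seeded with the two 32-byte strings shown above (3nyo6hyl... and knfk36hf...), and the --oflag=append copy that starts here must leave dd.dump1 holding its original bytes followed by dd.dump0's, which is exactly the concatenation the [[ ... ]] comparison below checks. A stand-alone sketch of the same assertion, with plain GNU dd standing in for spdk_dd --aio:

    dump0=3nyo6hyl898hoq6yat9i7eggdsm8x7tq    # 32-byte payload written to dd.dump0
    dump1=knfk36hfqwc5v41nn7lpborkdtjgyb48    # 32-byte payload written to dd.dump1
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]]  # original bytes first, appended payload after them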
00:08:33.722 [2024-11-17 08:08:38.563220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62498 ] 00:08:33.981 [2024-11-17 08:08:38.750906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.981 [2024-11-17 08:08:38.883768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.240 [2024-11-17 08:08:39.122508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.240  [2024-11-17T08:08:40.644Z] Copying: 32/32 [B] (average 31 kBps) 00:08:35.632 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ knfk36hfqwc5v41nn7lpborkdtjgyb483nyo6hyl898hoq6yat9i7eggdsm8x7tq == \k\n\f\k\3\6\h\f\q\w\c\5\v\4\1\n\n\7\l\p\b\o\r\k\d\t\j\g\y\b\4\8\3\n\y\o\6\h\y\l\8\9\8\h\o\q\6\y\a\t\9\i\7\e\g\g\d\s\m\8\x\7\t\q ]] 00:08:35.632 00:08:35.632 real 0m2.008s 00:08:35.632 user 0m1.555s 00:08:35.632 sys 0m0.327s 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.632 ************************************ 00:08:35.632 END TEST dd_flag_append_forced_aio 00:08:35.632 ************************************ 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:35.632 ************************************ 00:08:35.632 START TEST dd_flag_directory_forced_aio 00:08:35.632 ************************************ 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.632 08:08:40 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.632 08:08:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.632 [2024-11-17 08:08:40.632079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:35.632 [2024-11-17 08:08:40.632223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:08:35.892 [2024-11-17 08:08:40.815522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.152 [2024-11-17 08:08:40.954323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.411 [2024-11-17 08:08:41.187530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.411 [2024-11-17 08:08:41.323203] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:36.411 [2024-11-17 08:08:41.323279] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:36.411 [2024-11-17 08:08:41.323305] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.350 [2024-11-17 08:08:42.223389] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:37.610 08:08:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:37.869 [2024-11-17 08:08:42.625860] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:37.869 [2024-11-17 08:08:42.626804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62564 ] 00:08:37.869 [2024-11-17 08:08:42.830642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.129 [2024-11-17 08:08:42.968421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.388 [2024-11-17 08:08:43.207404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.388 [2024-11-17 08:08:43.335027] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:38.388 [2024-11-17 08:08:43.335097] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:38.388 [2024-11-17 08:08:43.335123] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.326 [2024-11-17 08:08:44.217818] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:39.587 08:08:44 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:39.587 00:08:39.587 real 0m3.975s 00:08:39.587 user 0m3.096s 00:08:39.587 sys 0m0.656s 00:08:39.587 ************************************ 00:08:39.587 END TEST dd_flag_directory_forced_aio 00:08:39.587 ************************************ 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:39.587 ************************************ 00:08:39.587 START TEST dd_flag_nofollow_forced_aio 00:08:39.587 ************************************ 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.587 08:08:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.847 [2024-11-17 08:08:44.699084] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:39.847 [2024-11-17 08:08:44.699390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62610 ] 00:08:40.106 [2024-11-17 08:08:44.885190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.106 [2024-11-17 08:08:45.016075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.364 [2024-11-17 08:08:45.263180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.622 [2024-11-17 08:08:45.399502] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:40.622 [2024-11-17 08:08:45.399807] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:40.622 [2024-11-17 08:08:45.399860] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.560 [2024-11-17 08:08:46.290551] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.560 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.819 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.819 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.819 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.819 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.819 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.819 08:08:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.819 [2024-11-17 08:08:46.671286] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:41.819 [2024-11-17 08:08:46.671413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62637 ] 00:08:42.079 [2024-11-17 08:08:46.852418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.079 [2024-11-17 08:08:46.984908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.338 [2024-11-17 08:08:47.214607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.597 [2024-11-17 08:08:47.348385] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.597 [2024-11-17 08:08:47.348458] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.597 [2024-11-17 08:08:47.348502] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.534 [2024-11-17 08:08:48.253315] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:43.534 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.794 08:08:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.794 [2024-11-17 08:08:48.647699] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:43.794 [2024-11-17 08:08:48.647998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62662 ] 00:08:44.052 [2024-11-17 08:08:48.833482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.052 [2024-11-17 08:08:48.970075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.325 [2024-11-17 08:08:49.214045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.592  [2024-11-17T08:08:50.550Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.538 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ epgrfa39f8cswnwc8bmu7gnj9v1nztqyhs5dx3femmdsx8eohdapmgtkbkztallt16ko7322mf3s61s4gfuvkq9g8h69ccajs5gy37b8iet3q9zgrta60jecarhyyjdx6h3xgf3mbxyt0gw7yohv84sgn6cvu8819026dajm5623aro8sbn95h1jfvliclts2har0mi553v9zwt8nws0hapma458aytocx036390owdue6agls16v2l58gm5p6nfszetxd5oajbxf5di5nbxmrk7ow3phy0zy2tgz803dy43wf2fqv2ffj8qitxylgdpa69o17nuekifnj3iubgsb2b4bbr266ybyoss8hbw9f1m5pd4dby3jagrgbseqsy9mblauk9vybewrk9hn9r6f1byzp28pr9vfz4giqv61k6x28166w5opjb6qss58uihu16qwbqcumnt9yw7bwivtraqdpdxs0idcd2qg3qf6xdomh1s3bslhlybfx31keeh == \e\p\g\r\f\a\3\9\f\8\c\s\w\n\w\c\8\b\m\u\7\g\n\j\9\v\1\n\z\t\q\y\h\s\5\d\x\3\f\e\m\m\d\s\x\8\e\o\h\d\a\p\m\g\t\k\b\k\z\t\a\l\l\t\1\6\k\o\7\3\2\2\m\f\3\s\6\1\s\4\g\f\u\v\k\q\9\g\8\h\6\9\c\c\a\j\s\5\g\y\3\7\b\8\i\e\t\3\q\9\z\g\r\t\a\6\0\j\e\c\a\r\h\y\y\j\d\x\6\h\3\x\g\f\3\m\b\x\y\t\0\g\w\7\y\o\h\v\8\4\s\g\n\6\c\v\u\8\8\1\9\0\2\6\d\a\j\m\5\6\2\3\a\r\o\8\s\b\n\9\5\h\1\j\f\v\l\i\c\l\t\s\2\h\a\r\0\m\i\5\5\3\v\9\z\w\t\8\n\w\s\0\h\a\p\m\a\4\5\8\a\y\t\o\c\x\0\3\6\3\9\0\o\w\d\u\e\6\a\g\l\s\1\6\v\2\l\5\8\g\m\5\p\6\n\f\s\z\e\t\x\d\5\o\a\j\b\x\f\5\d\i\5\n\b\x\m\r\k\7\o\w\3\p\h\y\0\z\y\2\t\g\z\8\0\3\d\y\4\3\w\f\2\f\q\v\2\f\f\j\8\q\i\t\x\y\l\g\d\p\a\6\9\o\1\7\n\u\e\k\i\f\n\j\3\i\u\b\g\s\b\2\b\4\b\b\r\2\6\6\y\b\y\o\s\s\8\h\b\w\9\f\1\m\5\p\d\4\d\b\y\3\j\a\g\r\g\b\s\e\q\s\y\9\m\b\l\a\u\k\9\v\y\b\e\w\r\k\9\h\n\9\r\6\f\1\b\y\z\p\2\8\p\r\9\v\f\z\4\g\i\q\v\6\1\k\6\x\2\8\1\6\6\w\5\o\p\j\b\6\q\s\s\5\8\u\i\h\u\1\6\q\w\b\q\c\u\m\n\t\9\y\w\7\b\w\i\v\t\r\a\q\d\p\d\x\s\0\i\d\c\d\2\q\g\3\q\f\6\x\d\o\m\h\1\s\3\b\s\l\h\l\y\b\f\x\3\1\k\e\e\h ]] 00:08:45.797 00:08:45.797 real 0m6.008s 00:08:45.797 user 0m4.711s 00:08:45.797 sys 0m0.951s 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.797 ************************************ 00:08:45.797 END TEST dd_flag_nofollow_forced_aio 00:08:45.797 ************************************ 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 ************************************ 00:08:45.797 START TEST dd_flag_noatime_forced_aio 00:08:45.797 ************************************ 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731830929 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731830930 00:08:45.797 08:08:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:46.736 08:08:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.996 [2024-11-17 08:08:51.807410] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
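For reference, dd_flag_noatime_forced_aio works in two steps: it records the access time of each dump file with stat --printf=%X, sleeps one second, copies dump0 with --iflag=noatime and checks that the recorded atime has not moved, then repeats the copy without the flag and checks that the atime does advance (the (( atime_if == ... )) and (( atime_if < ... )) lines further down). A minimal sketch of the first check, using placeholder file names rather than the test's own dump files:

    # Sketch only; f0/f1 are illustrative placeholder files, not part of this run.
    before=$(stat --printf=%X f0)                     # atime before the copy
    sleep 1
    ./build/bin/spdk_dd --aio --if=f0 --iflag=noatime --of=f1
    after=$(stat --printf=%X f0)                      # atime after the copy
    (( before == after )) && echo "noatime left the source atime untouched"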
00:08:46.996 [2024-11-17 08:08:51.807555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62720 ] 00:08:46.996 [2024-11-17 08:08:51.992729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.256 [2024-11-17 08:08:52.123067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.515 [2024-11-17 08:08:52.365042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.515  [2024-11-17T08:08:53.905Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.893 00:08:48.894 08:08:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.894 08:08:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731830929 )) 00:08:48.894 08:08:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.894 08:08:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731830930 )) 00:08:48.894 08:08:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.894 [2024-11-17 08:08:53.806869] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:48.894 [2024-11-17 08:08:53.806994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62749 ] 00:08:49.153 [2024-11-17 08:08:53.989313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.153 [2024-11-17 08:08:54.119186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.413 [2024-11-17 08:08:54.345651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.672  [2024-11-17T08:08:56.063Z] Copying: 512/512 [B] (average 500 kBps) 00:08:51.051 00:08:51.051 08:08:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:51.051 08:08:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731830934 )) 00:08:51.051 00:08:51.051 real 0m5.025s 00:08:51.051 user 0m3.097s 00:08:51.051 sys 0m0.683s 00:08:51.051 ************************************ 00:08:51.051 END TEST dd_flag_noatime_forced_aio 00:08:51.051 ************************************ 00:08:51.051 08:08:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.051 08:08:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.052 ************************************ 00:08:51.052 START TEST dd_flags_misc_forced_aio 00:08:51.052 ************************************ 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:51.052 08:08:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:51.052 [2024-11-17 08:08:55.880188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:51.052 [2024-11-17 08:08:55.880332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62793 ] 00:08:51.311 [2024-11-17 08:08:56.065605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.311 [2024-11-17 08:08:56.197988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.571 [2024-11-17 08:08:56.437219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.571  [2024-11-17T08:08:57.962Z] Copying: 512/512 [B] (average 500 kBps) 00:08:52.950 00:08:52.951 08:08:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v2jnmeao79mpxpbht10op3knsd8907eatcg8j6f5o6n0trxmbk6uyn6ukscy86knz0oe5wi89i6mom2k2cfx5mfct1e898uh5apb8hdcik798xz75um0vv9fv31kz5gke9cea06ulodm9qciwrmmm9bj5c4qvt0fy6rc69olwo39owtp76nmjreg2ykf8r1x0s1n6wxyvzeif60xhj2wg418ywawyl6wi6yev8kdhevtw9iiruyzcn8vgupxzei9suxlrn004xsow4abeigs4knunx9nwl2us4yynqeaqkvtspf5pwx8y0afx64s3t3g1gb7fg386pj9i3wtxzxqimrutfj0i9gr4d4xm7lj0oxsaxovpwy63euvun99iq01q7ufxhn54f4l8e4yqle5geboq6ebfhhvhxrwq19d6sxvv678defe9tm1ixnx5iayf32f17njctv1z48weo2vwehyuyoso9wxrlg9eenj0ftgohu75431ue3helvlhg1n == 
\v\2\j\n\m\e\a\o\7\9\m\p\x\p\b\h\t\1\0\o\p\3\k\n\s\d\8\9\0\7\e\a\t\c\g\8\j\6\f\5\o\6\n\0\t\r\x\m\b\k\6\u\y\n\6\u\k\s\c\y\8\6\k\n\z\0\o\e\5\w\i\8\9\i\6\m\o\m\2\k\2\c\f\x\5\m\f\c\t\1\e\8\9\8\u\h\5\a\p\b\8\h\d\c\i\k\7\9\8\x\z\7\5\u\m\0\v\v\9\f\v\3\1\k\z\5\g\k\e\9\c\e\a\0\6\u\l\o\d\m\9\q\c\i\w\r\m\m\m\9\b\j\5\c\4\q\v\t\0\f\y\6\r\c\6\9\o\l\w\o\3\9\o\w\t\p\7\6\n\m\j\r\e\g\2\y\k\f\8\r\1\x\0\s\1\n\6\w\x\y\v\z\e\i\f\6\0\x\h\j\2\w\g\4\1\8\y\w\a\w\y\l\6\w\i\6\y\e\v\8\k\d\h\e\v\t\w\9\i\i\r\u\y\z\c\n\8\v\g\u\p\x\z\e\i\9\s\u\x\l\r\n\0\0\4\x\s\o\w\4\a\b\e\i\g\s\4\k\n\u\n\x\9\n\w\l\2\u\s\4\y\y\n\q\e\a\q\k\v\t\s\p\f\5\p\w\x\8\y\0\a\f\x\6\4\s\3\t\3\g\1\g\b\7\f\g\3\8\6\p\j\9\i\3\w\t\x\z\x\q\i\m\r\u\t\f\j\0\i\9\g\r\4\d\4\x\m\7\l\j\0\o\x\s\a\x\o\v\p\w\y\6\3\e\u\v\u\n\9\9\i\q\0\1\q\7\u\f\x\h\n\5\4\f\4\l\8\e\4\y\q\l\e\5\g\e\b\o\q\6\e\b\f\h\h\v\h\x\r\w\q\1\9\d\6\s\x\v\v\6\7\8\d\e\f\e\9\t\m\1\i\x\n\x\5\i\a\y\f\3\2\f\1\7\n\j\c\t\v\1\z\4\8\w\e\o\2\v\w\e\h\y\u\y\o\s\o\9\w\x\r\l\g\9\e\e\n\j\0\f\t\g\o\h\u\7\5\4\3\1\u\e\3\h\e\l\v\l\h\g\1\n ]] 00:08:52.951 08:08:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:52.951 08:08:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:52.951 [2024-11-17 08:08:57.868761] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:52.951 [2024-11-17 08:08:57.868908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62817 ] 00:08:53.210 [2024-11-17 08:08:58.050676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.210 [2024-11-17 08:08:58.184855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.469 [2024-11-17 08:08:58.417609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.728  [2024-11-17T08:09:00.125Z] Copying: 512/512 [B] (average 500 kBps) 00:08:55.113 00:08:55.113 08:08:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v2jnmeao79mpxpbht10op3knsd8907eatcg8j6f5o6n0trxmbk6uyn6ukscy86knz0oe5wi89i6mom2k2cfx5mfct1e898uh5apb8hdcik798xz75um0vv9fv31kz5gke9cea06ulodm9qciwrmmm9bj5c4qvt0fy6rc69olwo39owtp76nmjreg2ykf8r1x0s1n6wxyvzeif60xhj2wg418ywawyl6wi6yev8kdhevtw9iiruyzcn8vgupxzei9suxlrn004xsow4abeigs4knunx9nwl2us4yynqeaqkvtspf5pwx8y0afx64s3t3g1gb7fg386pj9i3wtxzxqimrutfj0i9gr4d4xm7lj0oxsaxovpwy63euvun99iq01q7ufxhn54f4l8e4yqle5geboq6ebfhhvhxrwq19d6sxvv678defe9tm1ixnx5iayf32f17njctv1z48weo2vwehyuyoso9wxrlg9eenj0ftgohu75431ue3helvlhg1n == 
\v\2\j\n\m\e\a\o\7\9\m\p\x\p\b\h\t\1\0\o\p\3\k\n\s\d\8\9\0\7\e\a\t\c\g\8\j\6\f\5\o\6\n\0\t\r\x\m\b\k\6\u\y\n\6\u\k\s\c\y\8\6\k\n\z\0\o\e\5\w\i\8\9\i\6\m\o\m\2\k\2\c\f\x\5\m\f\c\t\1\e\8\9\8\u\h\5\a\p\b\8\h\d\c\i\k\7\9\8\x\z\7\5\u\m\0\v\v\9\f\v\3\1\k\z\5\g\k\e\9\c\e\a\0\6\u\l\o\d\m\9\q\c\i\w\r\m\m\m\9\b\j\5\c\4\q\v\t\0\f\y\6\r\c\6\9\o\l\w\o\3\9\o\w\t\p\7\6\n\m\j\r\e\g\2\y\k\f\8\r\1\x\0\s\1\n\6\w\x\y\v\z\e\i\f\6\0\x\h\j\2\w\g\4\1\8\y\w\a\w\y\l\6\w\i\6\y\e\v\8\k\d\h\e\v\t\w\9\i\i\r\u\y\z\c\n\8\v\g\u\p\x\z\e\i\9\s\u\x\l\r\n\0\0\4\x\s\o\w\4\a\b\e\i\g\s\4\k\n\u\n\x\9\n\w\l\2\u\s\4\y\y\n\q\e\a\q\k\v\t\s\p\f\5\p\w\x\8\y\0\a\f\x\6\4\s\3\t\3\g\1\g\b\7\f\g\3\8\6\p\j\9\i\3\w\t\x\z\x\q\i\m\r\u\t\f\j\0\i\9\g\r\4\d\4\x\m\7\l\j\0\o\x\s\a\x\o\v\p\w\y\6\3\e\u\v\u\n\9\9\i\q\0\1\q\7\u\f\x\h\n\5\4\f\4\l\8\e\4\y\q\l\e\5\g\e\b\o\q\6\e\b\f\h\h\v\h\x\r\w\q\1\9\d\6\s\x\v\v\6\7\8\d\e\f\e\9\t\m\1\i\x\n\x\5\i\a\y\f\3\2\f\1\7\n\j\c\t\v\1\z\4\8\w\e\o\2\v\w\e\h\y\u\y\o\s\o\9\w\x\r\l\g\9\e\e\n\j\0\f\t\g\o\h\u\7\5\4\3\1\u\e\3\h\e\l\v\l\h\g\1\n ]] 00:08:55.113 08:08:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:55.113 08:08:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:55.113 [2024-11-17 08:08:59.852082] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:55.113 [2024-11-17 08:08:59.852213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62838 ] 00:08:55.113 [2024-11-17 08:09:00.036530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.372 [2024-11-17 08:09:00.171908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.632 [2024-11-17 08:09:00.392218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.632  [2024-11-17T08:09:02.024Z] Copying: 512/512 [B] (average 125 kBps) 00:08:57.012 00:08:57.012 08:09:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v2jnmeao79mpxpbht10op3knsd8907eatcg8j6f5o6n0trxmbk6uyn6ukscy86knz0oe5wi89i6mom2k2cfx5mfct1e898uh5apb8hdcik798xz75um0vv9fv31kz5gke9cea06ulodm9qciwrmmm9bj5c4qvt0fy6rc69olwo39owtp76nmjreg2ykf8r1x0s1n6wxyvzeif60xhj2wg418ywawyl6wi6yev8kdhevtw9iiruyzcn8vgupxzei9suxlrn004xsow4abeigs4knunx9nwl2us4yynqeaqkvtspf5pwx8y0afx64s3t3g1gb7fg386pj9i3wtxzxqimrutfj0i9gr4d4xm7lj0oxsaxovpwy63euvun99iq01q7ufxhn54f4l8e4yqle5geboq6ebfhhvhxrwq19d6sxvv678defe9tm1ixnx5iayf32f17njctv1z48weo2vwehyuyoso9wxrlg9eenj0ftgohu75431ue3helvlhg1n == 
\v\2\j\n\m\e\a\o\7\9\m\p\x\p\b\h\t\1\0\o\p\3\k\n\s\d\8\9\0\7\e\a\t\c\g\8\j\6\f\5\o\6\n\0\t\r\x\m\b\k\6\u\y\n\6\u\k\s\c\y\8\6\k\n\z\0\o\e\5\w\i\8\9\i\6\m\o\m\2\k\2\c\f\x\5\m\f\c\t\1\e\8\9\8\u\h\5\a\p\b\8\h\d\c\i\k\7\9\8\x\z\7\5\u\m\0\v\v\9\f\v\3\1\k\z\5\g\k\e\9\c\e\a\0\6\u\l\o\d\m\9\q\c\i\w\r\m\m\m\9\b\j\5\c\4\q\v\t\0\f\y\6\r\c\6\9\o\l\w\o\3\9\o\w\t\p\7\6\n\m\j\r\e\g\2\y\k\f\8\r\1\x\0\s\1\n\6\w\x\y\v\z\e\i\f\6\0\x\h\j\2\w\g\4\1\8\y\w\a\w\y\l\6\w\i\6\y\e\v\8\k\d\h\e\v\t\w\9\i\i\r\u\y\z\c\n\8\v\g\u\p\x\z\e\i\9\s\u\x\l\r\n\0\0\4\x\s\o\w\4\a\b\e\i\g\s\4\k\n\u\n\x\9\n\w\l\2\u\s\4\y\y\n\q\e\a\q\k\v\t\s\p\f\5\p\w\x\8\y\0\a\f\x\6\4\s\3\t\3\g\1\g\b\7\f\g\3\8\6\p\j\9\i\3\w\t\x\z\x\q\i\m\r\u\t\f\j\0\i\9\g\r\4\d\4\x\m\7\l\j\0\o\x\s\a\x\o\v\p\w\y\6\3\e\u\v\u\n\9\9\i\q\0\1\q\7\u\f\x\h\n\5\4\f\4\l\8\e\4\y\q\l\e\5\g\e\b\o\q\6\e\b\f\h\h\v\h\x\r\w\q\1\9\d\6\s\x\v\v\6\7\8\d\e\f\e\9\t\m\1\i\x\n\x\5\i\a\y\f\3\2\f\1\7\n\j\c\t\v\1\z\4\8\w\e\o\2\v\w\e\h\y\u\y\o\s\o\9\w\x\r\l\g\9\e\e\n\j\0\f\t\g\o\h\u\7\5\4\3\1\u\e\3\h\e\l\v\l\h\g\1\n ]] 00:08:57.012 08:09:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.012 08:09:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:57.012 [2024-11-17 08:09:01.837340] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:57.012 [2024-11-17 08:09:01.837480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62863 ] 00:08:57.012 [2024-11-17 08:09:02.020839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.272 [2024-11-17 08:09:02.167868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.531 [2024-11-17 08:09:02.401380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.531  [2024-11-17T08:09:03.923Z] Copying: 512/512 [B] (average 500 kBps) 00:08:58.911 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v2jnmeao79mpxpbht10op3knsd8907eatcg8j6f5o6n0trxmbk6uyn6ukscy86knz0oe5wi89i6mom2k2cfx5mfct1e898uh5apb8hdcik798xz75um0vv9fv31kz5gke9cea06ulodm9qciwrmmm9bj5c4qvt0fy6rc69olwo39owtp76nmjreg2ykf8r1x0s1n6wxyvzeif60xhj2wg418ywawyl6wi6yev8kdhevtw9iiruyzcn8vgupxzei9suxlrn004xsow4abeigs4knunx9nwl2us4yynqeaqkvtspf5pwx8y0afx64s3t3g1gb7fg386pj9i3wtxzxqimrutfj0i9gr4d4xm7lj0oxsaxovpwy63euvun99iq01q7ufxhn54f4l8e4yqle5geboq6ebfhhvhxrwq19d6sxvv678defe9tm1ixnx5iayf32f17njctv1z48weo2vwehyuyoso9wxrlg9eenj0ftgohu75431ue3helvlhg1n == 
\v\2\j\n\m\e\a\o\7\9\m\p\x\p\b\h\t\1\0\o\p\3\k\n\s\d\8\9\0\7\e\a\t\c\g\8\j\6\f\5\o\6\n\0\t\r\x\m\b\k\6\u\y\n\6\u\k\s\c\y\8\6\k\n\z\0\o\e\5\w\i\8\9\i\6\m\o\m\2\k\2\c\f\x\5\m\f\c\t\1\e\8\9\8\u\h\5\a\p\b\8\h\d\c\i\k\7\9\8\x\z\7\5\u\m\0\v\v\9\f\v\3\1\k\z\5\g\k\e\9\c\e\a\0\6\u\l\o\d\m\9\q\c\i\w\r\m\m\m\9\b\j\5\c\4\q\v\t\0\f\y\6\r\c\6\9\o\l\w\o\3\9\o\w\t\p\7\6\n\m\j\r\e\g\2\y\k\f\8\r\1\x\0\s\1\n\6\w\x\y\v\z\e\i\f\6\0\x\h\j\2\w\g\4\1\8\y\w\a\w\y\l\6\w\i\6\y\e\v\8\k\d\h\e\v\t\w\9\i\i\r\u\y\z\c\n\8\v\g\u\p\x\z\e\i\9\s\u\x\l\r\n\0\0\4\x\s\o\w\4\a\b\e\i\g\s\4\k\n\u\n\x\9\n\w\l\2\u\s\4\y\y\n\q\e\a\q\k\v\t\s\p\f\5\p\w\x\8\y\0\a\f\x\6\4\s\3\t\3\g\1\g\b\7\f\g\3\8\6\p\j\9\i\3\w\t\x\z\x\q\i\m\r\u\t\f\j\0\i\9\g\r\4\d\4\x\m\7\l\j\0\o\x\s\a\x\o\v\p\w\y\6\3\e\u\v\u\n\9\9\i\q\0\1\q\7\u\f\x\h\n\5\4\f\4\l\8\e\4\y\q\l\e\5\g\e\b\o\q\6\e\b\f\h\h\v\h\x\r\w\q\1\9\d\6\s\x\v\v\6\7\8\d\e\f\e\9\t\m\1\i\x\n\x\5\i\a\y\f\3\2\f\1\7\n\j\c\t\v\1\z\4\8\w\e\o\2\v\w\e\h\y\u\y\o\s\o\9\w\x\r\l\g\9\e\e\n\j\0\f\t\g\o\h\u\7\5\4\3\1\u\e\3\h\e\l\v\l\h\g\1\n ]] 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.911 08:09:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:58.911 [2024-11-17 08:09:03.856112] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
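The four runs above exercised --iflag=direct against each write flag; the block now repeats with --iflag=nonblock. Every iteration follows the same pattern: 512 random bytes go into dd.dump0, spdk_dd copies them with one read flag and one write flag, and the long [[ ... == ... ]] lines are the byte-for-byte comparison of dump1 against dump0. The nested loop looks roughly like the sketch below (paraphrased from dd/posix.sh; the random-data helper and the comparison are stand-ins, not the script's own code):

    # Rough shape of the dd_flags_misc loop, not the verbatim script
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            head -c 512 /dev/urandom > dd.dump0       # stand-in for the suite's gen_bytes 512
            ./build/bin/spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                                --of=dd.dump1 --oflag="$flag_rw"
            cmp -s dd.dump0 dd.dump1 || echo "mismatch ($flag_ro -> $flag_rw)"
        done
    done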
00:08:58.911 [2024-11-17 08:09:03.856240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62883 ] 00:08:59.170 [2024-11-17 08:09:04.040786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.170 [2024-11-17 08:09:04.172500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.429 [2024-11-17 08:09:04.413475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.688  [2024-11-17T08:09:06.079Z] Copying: 512/512 [B] (average 500 kBps) 00:09:01.067 00:09:01.067 08:09:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vew2s5zi7y31gyr9d1hkm3fo2kdp5tkr1z6m9mpoezk4jze7hqd50y5slizf08yqisytmw4ur70n51s9bz1l41trus18451gsk3nsfgll9qlbwbyf5eq8gfzf0pqga3ghwotso00hhknnfpy2le2o2jrkj3h34656l3wz1ifyjghxsb1ft7cs57g4hshovzes63q6kz7fh0y1kim9ze0votxrbh078xvgta7owth9du9g7xidobira3szszpaaqvrwjph9mvy8vo7haa2xxpp0a021u4cs4esa13ony3fiqqw9kkcdvpheo76mjyhn3y8ocbybf0wq7vl3wmsxaq1732wfjtd3cpbs3tzwgnngx9msabp79f688pzpadwfpx85jhmeds1qtz9ddyo1eack4ff40315abak3sacpmp8palnqs6mev8hy5c2m34pe1t3ut00is8j8280pvxvavsreaxbe3iri89c36oll0ikmlt96f4aywvjy4h2gkd7o3 == \v\e\w\2\s\5\z\i\7\y\3\1\g\y\r\9\d\1\h\k\m\3\f\o\2\k\d\p\5\t\k\r\1\z\6\m\9\m\p\o\e\z\k\4\j\z\e\7\h\q\d\5\0\y\5\s\l\i\z\f\0\8\y\q\i\s\y\t\m\w\4\u\r\7\0\n\5\1\s\9\b\z\1\l\4\1\t\r\u\s\1\8\4\5\1\g\s\k\3\n\s\f\g\l\l\9\q\l\b\w\b\y\f\5\e\q\8\g\f\z\f\0\p\q\g\a\3\g\h\w\o\t\s\o\0\0\h\h\k\n\n\f\p\y\2\l\e\2\o\2\j\r\k\j\3\h\3\4\6\5\6\l\3\w\z\1\i\f\y\j\g\h\x\s\b\1\f\t\7\c\s\5\7\g\4\h\s\h\o\v\z\e\s\6\3\q\6\k\z\7\f\h\0\y\1\k\i\m\9\z\e\0\v\o\t\x\r\b\h\0\7\8\x\v\g\t\a\7\o\w\t\h\9\d\u\9\g\7\x\i\d\o\b\i\r\a\3\s\z\s\z\p\a\a\q\v\r\w\j\p\h\9\m\v\y\8\v\o\7\h\a\a\2\x\x\p\p\0\a\0\2\1\u\4\c\s\4\e\s\a\1\3\o\n\y\3\f\i\q\q\w\9\k\k\c\d\v\p\h\e\o\7\6\m\j\y\h\n\3\y\8\o\c\b\y\b\f\0\w\q\7\v\l\3\w\m\s\x\a\q\1\7\3\2\w\f\j\t\d\3\c\p\b\s\3\t\z\w\g\n\n\g\x\9\m\s\a\b\p\7\9\f\6\8\8\p\z\p\a\d\w\f\p\x\8\5\j\h\m\e\d\s\1\q\t\z\9\d\d\y\o\1\e\a\c\k\4\f\f\4\0\3\1\5\a\b\a\k\3\s\a\c\p\m\p\8\p\a\l\n\q\s\6\m\e\v\8\h\y\5\c\2\m\3\4\p\e\1\t\3\u\t\0\0\i\s\8\j\8\2\8\0\p\v\x\v\a\v\s\r\e\a\x\b\e\3\i\r\i\8\9\c\3\6\o\l\l\0\i\k\m\l\t\9\6\f\4\a\y\w\v\j\y\4\h\2\g\k\d\7\o\3 ]] 00:09:01.067 08:09:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.067 08:09:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:01.067 [2024-11-17 08:09:05.858460] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:01.067 [2024-11-17 08:09:05.858580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62907 ] 00:09:01.067 [2024-11-17 08:09:06.039538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.326 [2024-11-17 08:09:06.172847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.585 [2024-11-17 08:09:06.403661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.585  [2024-11-17T08:09:07.974Z] Copying: 512/512 [B] (average 500 kBps) 00:09:02.962 00:09:02.962 08:09:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vew2s5zi7y31gyr9d1hkm3fo2kdp5tkr1z6m9mpoezk4jze7hqd50y5slizf08yqisytmw4ur70n51s9bz1l41trus18451gsk3nsfgll9qlbwbyf5eq8gfzf0pqga3ghwotso00hhknnfpy2le2o2jrkj3h34656l3wz1ifyjghxsb1ft7cs57g4hshovzes63q6kz7fh0y1kim9ze0votxrbh078xvgta7owth9du9g7xidobira3szszpaaqvrwjph9mvy8vo7haa2xxpp0a021u4cs4esa13ony3fiqqw9kkcdvpheo76mjyhn3y8ocbybf0wq7vl3wmsxaq1732wfjtd3cpbs3tzwgnngx9msabp79f688pzpadwfpx85jhmeds1qtz9ddyo1eack4ff40315abak3sacpmp8palnqs6mev8hy5c2m34pe1t3ut00is8j8280pvxvavsreaxbe3iri89c36oll0ikmlt96f4aywvjy4h2gkd7o3 == \v\e\w\2\s\5\z\i\7\y\3\1\g\y\r\9\d\1\h\k\m\3\f\o\2\k\d\p\5\t\k\r\1\z\6\m\9\m\p\o\e\z\k\4\j\z\e\7\h\q\d\5\0\y\5\s\l\i\z\f\0\8\y\q\i\s\y\t\m\w\4\u\r\7\0\n\5\1\s\9\b\z\1\l\4\1\t\r\u\s\1\8\4\5\1\g\s\k\3\n\s\f\g\l\l\9\q\l\b\w\b\y\f\5\e\q\8\g\f\z\f\0\p\q\g\a\3\g\h\w\o\t\s\o\0\0\h\h\k\n\n\f\p\y\2\l\e\2\o\2\j\r\k\j\3\h\3\4\6\5\6\l\3\w\z\1\i\f\y\j\g\h\x\s\b\1\f\t\7\c\s\5\7\g\4\h\s\h\o\v\z\e\s\6\3\q\6\k\z\7\f\h\0\y\1\k\i\m\9\z\e\0\v\o\t\x\r\b\h\0\7\8\x\v\g\t\a\7\o\w\t\h\9\d\u\9\g\7\x\i\d\o\b\i\r\a\3\s\z\s\z\p\a\a\q\v\r\w\j\p\h\9\m\v\y\8\v\o\7\h\a\a\2\x\x\p\p\0\a\0\2\1\u\4\c\s\4\e\s\a\1\3\o\n\y\3\f\i\q\q\w\9\k\k\c\d\v\p\h\e\o\7\6\m\j\y\h\n\3\y\8\o\c\b\y\b\f\0\w\q\7\v\l\3\w\m\s\x\a\q\1\7\3\2\w\f\j\t\d\3\c\p\b\s\3\t\z\w\g\n\n\g\x\9\m\s\a\b\p\7\9\f\6\8\8\p\z\p\a\d\w\f\p\x\8\5\j\h\m\e\d\s\1\q\t\z\9\d\d\y\o\1\e\a\c\k\4\f\f\4\0\3\1\5\a\b\a\k\3\s\a\c\p\m\p\8\p\a\l\n\q\s\6\m\e\v\8\h\y\5\c\2\m\3\4\p\e\1\t\3\u\t\0\0\i\s\8\j\8\2\8\0\p\v\x\v\a\v\s\r\e\a\x\b\e\3\i\r\i\8\9\c\3\6\o\l\l\0\i\k\m\l\t\9\6\f\4\a\y\w\v\j\y\4\h\2\g\k\d\7\o\3 ]] 00:09:02.962 08:09:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.962 08:09:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:02.962 [2024-11-17 08:09:07.832577] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:02.962 [2024-11-17 08:09:07.832722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62932 ] 00:09:03.222 [2024-11-17 08:09:08.015266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.222 [2024-11-17 08:09:08.150822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.482 [2024-11-17 08:09:08.390790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.741  [2024-11-17T08:09:10.133Z] Copying: 512/512 [B] (average 250 kBps) 00:09:05.121 00:09:05.122 08:09:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vew2s5zi7y31gyr9d1hkm3fo2kdp5tkr1z6m9mpoezk4jze7hqd50y5slizf08yqisytmw4ur70n51s9bz1l41trus18451gsk3nsfgll9qlbwbyf5eq8gfzf0pqga3ghwotso00hhknnfpy2le2o2jrkj3h34656l3wz1ifyjghxsb1ft7cs57g4hshovzes63q6kz7fh0y1kim9ze0votxrbh078xvgta7owth9du9g7xidobira3szszpaaqvrwjph9mvy8vo7haa2xxpp0a021u4cs4esa13ony3fiqqw9kkcdvpheo76mjyhn3y8ocbybf0wq7vl3wmsxaq1732wfjtd3cpbs3tzwgnngx9msabp79f688pzpadwfpx85jhmeds1qtz9ddyo1eack4ff40315abak3sacpmp8palnqs6mev8hy5c2m34pe1t3ut00is8j8280pvxvavsreaxbe3iri89c36oll0ikmlt96f4aywvjy4h2gkd7o3 == \v\e\w\2\s\5\z\i\7\y\3\1\g\y\r\9\d\1\h\k\m\3\f\o\2\k\d\p\5\t\k\r\1\z\6\m\9\m\p\o\e\z\k\4\j\z\e\7\h\q\d\5\0\y\5\s\l\i\z\f\0\8\y\q\i\s\y\t\m\w\4\u\r\7\0\n\5\1\s\9\b\z\1\l\4\1\t\r\u\s\1\8\4\5\1\g\s\k\3\n\s\f\g\l\l\9\q\l\b\w\b\y\f\5\e\q\8\g\f\z\f\0\p\q\g\a\3\g\h\w\o\t\s\o\0\0\h\h\k\n\n\f\p\y\2\l\e\2\o\2\j\r\k\j\3\h\3\4\6\5\6\l\3\w\z\1\i\f\y\j\g\h\x\s\b\1\f\t\7\c\s\5\7\g\4\h\s\h\o\v\z\e\s\6\3\q\6\k\z\7\f\h\0\y\1\k\i\m\9\z\e\0\v\o\t\x\r\b\h\0\7\8\x\v\g\t\a\7\o\w\t\h\9\d\u\9\g\7\x\i\d\o\b\i\r\a\3\s\z\s\z\p\a\a\q\v\r\w\j\p\h\9\m\v\y\8\v\o\7\h\a\a\2\x\x\p\p\0\a\0\2\1\u\4\c\s\4\e\s\a\1\3\o\n\y\3\f\i\q\q\w\9\k\k\c\d\v\p\h\e\o\7\6\m\j\y\h\n\3\y\8\o\c\b\y\b\f\0\w\q\7\v\l\3\w\m\s\x\a\q\1\7\3\2\w\f\j\t\d\3\c\p\b\s\3\t\z\w\g\n\n\g\x\9\m\s\a\b\p\7\9\f\6\8\8\p\z\p\a\d\w\f\p\x\8\5\j\h\m\e\d\s\1\q\t\z\9\d\d\y\o\1\e\a\c\k\4\f\f\4\0\3\1\5\a\b\a\k\3\s\a\c\p\m\p\8\p\a\l\n\q\s\6\m\e\v\8\h\y\5\c\2\m\3\4\p\e\1\t\3\u\t\0\0\i\s\8\j\8\2\8\0\p\v\x\v\a\v\s\r\e\a\x\b\e\3\i\r\i\8\9\c\3\6\o\l\l\0\i\k\m\l\t\9\6\f\4\a\y\w\v\j\y\4\h\2\g\k\d\7\o\3 ]] 00:09:05.122 08:09:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:05.122 08:09:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:05.122 [2024-11-17 08:09:09.825623] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:05.122 [2024-11-17 08:09:09.825753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:09:05.122 [2024-11-17 08:09:10.009860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.381 [2024-11-17 08:09:10.155882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.381 [2024-11-17 08:09:10.387207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.640  [2024-11-17T08:09:12.031Z] Copying: 512/512 [B] (average 500 kBps) 00:09:07.019 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vew2s5zi7y31gyr9d1hkm3fo2kdp5tkr1z6m9mpoezk4jze7hqd50y5slizf08yqisytmw4ur70n51s9bz1l41trus18451gsk3nsfgll9qlbwbyf5eq8gfzf0pqga3ghwotso00hhknnfpy2le2o2jrkj3h34656l3wz1ifyjghxsb1ft7cs57g4hshovzes63q6kz7fh0y1kim9ze0votxrbh078xvgta7owth9du9g7xidobira3szszpaaqvrwjph9mvy8vo7haa2xxpp0a021u4cs4esa13ony3fiqqw9kkcdvpheo76mjyhn3y8ocbybf0wq7vl3wmsxaq1732wfjtd3cpbs3tzwgnngx9msabp79f688pzpadwfpx85jhmeds1qtz9ddyo1eack4ff40315abak3sacpmp8palnqs6mev8hy5c2m34pe1t3ut00is8j8280pvxvavsreaxbe3iri89c36oll0ikmlt96f4aywvjy4h2gkd7o3 == \v\e\w\2\s\5\z\i\7\y\3\1\g\y\r\9\d\1\h\k\m\3\f\o\2\k\d\p\5\t\k\r\1\z\6\m\9\m\p\o\e\z\k\4\j\z\e\7\h\q\d\5\0\y\5\s\l\i\z\f\0\8\y\q\i\s\y\t\m\w\4\u\r\7\0\n\5\1\s\9\b\z\1\l\4\1\t\r\u\s\1\8\4\5\1\g\s\k\3\n\s\f\g\l\l\9\q\l\b\w\b\y\f\5\e\q\8\g\f\z\f\0\p\q\g\a\3\g\h\w\o\t\s\o\0\0\h\h\k\n\n\f\p\y\2\l\e\2\o\2\j\r\k\j\3\h\3\4\6\5\6\l\3\w\z\1\i\f\y\j\g\h\x\s\b\1\f\t\7\c\s\5\7\g\4\h\s\h\o\v\z\e\s\6\3\q\6\k\z\7\f\h\0\y\1\k\i\m\9\z\e\0\v\o\t\x\r\b\h\0\7\8\x\v\g\t\a\7\o\w\t\h\9\d\u\9\g\7\x\i\d\o\b\i\r\a\3\s\z\s\z\p\a\a\q\v\r\w\j\p\h\9\m\v\y\8\v\o\7\h\a\a\2\x\x\p\p\0\a\0\2\1\u\4\c\s\4\e\s\a\1\3\o\n\y\3\f\i\q\q\w\9\k\k\c\d\v\p\h\e\o\7\6\m\j\y\h\n\3\y\8\o\c\b\y\b\f\0\w\q\7\v\l\3\w\m\s\x\a\q\1\7\3\2\w\f\j\t\d\3\c\p\b\s\3\t\z\w\g\n\n\g\x\9\m\s\a\b\p\7\9\f\6\8\8\p\z\p\a\d\w\f\p\x\8\5\j\h\m\e\d\s\1\q\t\z\9\d\d\y\o\1\e\a\c\k\4\f\f\4\0\3\1\5\a\b\a\k\3\s\a\c\p\m\p\8\p\a\l\n\q\s\6\m\e\v\8\h\y\5\c\2\m\3\4\p\e\1\t\3\u\t\0\0\i\s\8\j\8\2\8\0\p\v\x\v\a\v\s\r\e\a\x\b\e\3\i\r\i\8\9\c\3\6\o\l\l\0\i\k\m\l\t\9\6\f\4\a\y\w\v\j\y\4\h\2\g\k\d\7\o\3 ]] 00:09:07.019 00:09:07.019 real 0m15.960s 00:09:07.019 user 0m12.445s 00:09:07.019 sys 0m2.529s 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:07.019 ************************************ 00:09:07.019 END TEST dd_flags_misc_forced_aio 00:09:07.019 ************************************ 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:07.019 00:09:07.019 real 1m6.850s 00:09:07.019 user 0m50.261s 00:09:07.019 sys 0m21.767s 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.019 08:09:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set 
+x 00:09:07.019 ************************************ 00:09:07.019 END TEST spdk_dd_posix 00:09:07.019 ************************************ 00:09:07.019 08:09:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:07.019 08:09:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.019 08:09:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.019 08:09:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:07.019 ************************************ 00:09:07.019 START TEST spdk_dd_malloc 00:09:07.019 ************************************ 00:09:07.019 08:09:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:07.019 * Looking for test storage... 00:09:07.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:07.019 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.019 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.019 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.280 --rc genhtml_branch_coverage=1 00:09:07.280 --rc genhtml_function_coverage=1 00:09:07.280 --rc genhtml_legend=1 00:09:07.280 --rc geninfo_all_blocks=1 00:09:07.280 --rc geninfo_unexecuted_blocks=1 00:09:07.280 00:09:07.280 ' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.280 --rc genhtml_branch_coverage=1 00:09:07.280 --rc genhtml_function_coverage=1 00:09:07.280 --rc genhtml_legend=1 00:09:07.280 --rc geninfo_all_blocks=1 00:09:07.280 --rc geninfo_unexecuted_blocks=1 00:09:07.280 00:09:07.280 ' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.280 --rc genhtml_branch_coverage=1 00:09:07.280 --rc genhtml_function_coverage=1 00:09:07.280 --rc genhtml_legend=1 00:09:07.280 --rc geninfo_all_blocks=1 00:09:07.280 --rc geninfo_unexecuted_blocks=1 00:09:07.280 00:09:07.280 ' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.280 --rc genhtml_branch_coverage=1 00:09:07.280 --rc genhtml_function_coverage=1 00:09:07.280 --rc genhtml_legend=1 00:09:07.280 --rc geninfo_all_blocks=1 00:09:07.280 --rc geninfo_unexecuted_blocks=1 00:09:07.280 00:09:07.280 ' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.280 08:09:12 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:07.280 ************************************ 00:09:07.280 START TEST dd_malloc_copy 00:09:07.280 ************************************ 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
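The associative arrays declared here are what gen_conf expands into the JSON configuration printed a few lines below: two malloc bdevs of 1048576 blocks at 512 bytes each (512 MiB apiece) plus a bdev_wait_for_examine step, fed to spdk_dd on /dev/fd/62. A standalone sketch of the first copy direction, assuming an spdk_dd binary under ./build/bin:

    # Sketch of the gen_conf + spdk_dd pipeline used by dd_malloc_copy;
    # the real run passes the config on /dev/fd/62 in the same way.
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(echo '
      { "subsystems": [ { "subsystem": "bdev", "config": [
          { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
            "method": "bdev_malloc_create" },
          { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
            "method": "bdev_malloc_create" },
          { "method": "bdev_wait_for_examine" } ] } ] }')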
00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:07.280 08:09:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:07.280 { 00:09:07.280 "subsystems": [ 00:09:07.280 { 00:09:07.280 "subsystem": "bdev", 00:09:07.280 "config": [ 00:09:07.280 { 00:09:07.280 "params": { 00:09:07.280 "block_size": 512, 00:09:07.280 "num_blocks": 1048576, 00:09:07.280 "name": "malloc0" 00:09:07.280 }, 00:09:07.280 "method": "bdev_malloc_create" 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "params": { 00:09:07.280 "block_size": 512, 00:09:07.280 "num_blocks": 1048576, 00:09:07.280 "name": "malloc1" 00:09:07.280 }, 00:09:07.280 "method": "bdev_malloc_create" 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "method": "bdev_wait_for_examine" 00:09:07.280 } 00:09:07.280 ] 00:09:07.280 } 00:09:07.280 ] 00:09:07.280 } 00:09:07.281 [2024-11-17 08:09:12.227646] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:07.281 [2024-11-17 08:09:12.227779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63057 ] 00:09:07.540 [2024-11-17 08:09:12.412145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.540 [2024-11-17 08:09:12.543528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.799 [2024-11-17 08:09:12.789191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.338  [2024-11-17T08:09:16.288Z] Copying: 235/512 [MB] (235 MBps) [2024-11-17T08:09:16.547Z] Copying: 474/512 [MB] (239 MBps) [2024-11-17T08:09:20.810Z] Copying: 512/512 [MB] (average 237 MBps) 00:09:15.798 00:09:15.798 08:09:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:15.798 08:09:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:15.798 08:09:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:15.798 08:09:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 { 00:09:15.798 "subsystems": [ 00:09:15.798 { 00:09:15.798 "subsystem": "bdev", 00:09:15.798 "config": [ 00:09:15.798 { 00:09:15.798 "params": { 00:09:15.798 "block_size": 512, 00:09:15.798 "num_blocks": 1048576, 00:09:15.798 "name": "malloc0" 00:09:15.798 }, 00:09:15.798 "method": "bdev_malloc_create" 00:09:15.798 }, 00:09:15.798 { 00:09:15.798 "params": { 00:09:15.798 "block_size": 512, 00:09:15.798 "num_blocks": 1048576, 00:09:15.798 "name": "malloc1" 00:09:15.798 }, 00:09:15.798 "method": 
"bdev_malloc_create" 00:09:15.798 }, 00:09:15.798 { 00:09:15.798 "method": "bdev_wait_for_examine" 00:09:15.798 } 00:09:15.798 ] 00:09:15.798 } 00:09:15.798 ] 00:09:15.798 } 00:09:15.798 [2024-11-17 08:09:20.602196] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:15.798 [2024-11-17 08:09:20.602350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:09:15.798 [2024-11-17 08:09:20.788694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.057 [2024-11-17 08:09:20.921931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.317 [2024-11-17 08:09:21.161165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.855  [2024-11-17T08:09:24.805Z] Copying: 234/512 [MB] (234 MBps) [2024-11-17T08:09:24.805Z] Copying: 468/512 [MB] (234 MBps) [2024-11-17T08:09:29.000Z] Copying: 512/512 [MB] (average 234 MBps) 00:09:23.988 00:09:23.988 00:09:23.988 real 0m16.801s 00:09:23.988 user 0m14.974s 00:09:23.988 sys 0m1.633s 00:09:23.988 08:09:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.988 08:09:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:23.988 ************************************ 00:09:23.988 END TEST dd_malloc_copy 00:09:23.988 ************************************ 00:09:23.988 00:09:23.988 real 0m17.119s 00:09:23.988 user 0m15.135s 00:09:23.988 sys 0m1.803s 00:09:23.988 08:09:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.988 08:09:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:23.988 ************************************ 00:09:23.988 END TEST spdk_dd_malloc 00:09:23.988 ************************************ 00:09:24.248 08:09:29 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:24.248 08:09:29 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:24.248 08:09:29 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.248 08:09:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:24.248 ************************************ 00:09:24.248 START TEST spdk_dd_bdev_to_bdev 00:09:24.248 ************************************ 00:09:24.248 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:24.248 * Looking for test storage... 
00:09:24.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:24.248 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.248 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.248 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.508 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.508 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.508 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.509 --rc genhtml_branch_coverage=1 00:09:24.509 --rc genhtml_function_coverage=1 00:09:24.509 --rc genhtml_legend=1 00:09:24.509 --rc geninfo_all_blocks=1 00:09:24.509 --rc geninfo_unexecuted_blocks=1 00:09:24.509 00:09:24.509 ' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.509 --rc genhtml_branch_coverage=1 00:09:24.509 --rc genhtml_function_coverage=1 00:09:24.509 --rc genhtml_legend=1 00:09:24.509 --rc geninfo_all_blocks=1 00:09:24.509 --rc geninfo_unexecuted_blocks=1 00:09:24.509 00:09:24.509 ' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.509 --rc genhtml_branch_coverage=1 00:09:24.509 --rc genhtml_function_coverage=1 00:09:24.509 --rc genhtml_legend=1 00:09:24.509 --rc geninfo_all_blocks=1 00:09:24.509 --rc geninfo_unexecuted_blocks=1 00:09:24.509 00:09:24.509 ' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.509 --rc genhtml_branch_coverage=1 00:09:24.509 --rc genhtml_function_coverage=1 00:09:24.509 --rc genhtml_legend=1 00:09:24.509 --rc geninfo_all_blocks=1 00:09:24.509 --rc geninfo_unexecuted_blocks=1 00:09:24.509 00:09:24.509 ' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.509 08:09:29 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:24.509 ************************************ 00:09:24.509 START TEST dd_inflate_file 00:09:24.509 ************************************ 00:09:24.509 08:09:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:24.509 [2024-11-17 08:09:29.443740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:24.509 [2024-11-17 08:09:29.443860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63318 ] 00:09:24.769 [2024-11-17 08:09:29.627075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.769 [2024-11-17 08:09:29.760400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.028 [2024-11-17 08:09:29.999735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.288  [2024-11-17T08:09:31.681Z] Copying: 64/64 [MB] (average 1185 MBps) 00:09:26.669 00:09:26.669 ************************************ 00:09:26.669 END TEST dd_inflate_file 00:09:26.669 ************************************ 00:09:26.669 00:09:26.669 real 0m2.011s 00:09:26.669 user 0m1.582s 00:09:26.669 sys 0m1.294s 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.669 ************************************ 00:09:26.669 START TEST dd_copy_to_out_bdev 00:09:26.669 ************************************ 00:09:26.669 08:09:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:26.669 { 00:09:26.669 "subsystems": [ 00:09:26.669 { 00:09:26.669 "subsystem": "bdev", 00:09:26.669 "config": [ 00:09:26.669 { 00:09:26.669 "params": { 00:09:26.669 "trtype": "pcie", 00:09:26.669 "traddr": "0000:00:10.0", 00:09:26.669 "name": "Nvme0" 00:09:26.669 }, 00:09:26.669 "method": "bdev_nvme_attach_controller" 00:09:26.669 }, 00:09:26.669 { 00:09:26.669 "params": { 00:09:26.669 "trtype": "pcie", 00:09:26.669 "traddr": "0000:00:11.0", 00:09:26.669 "name": "Nvme1" 00:09:26.669 }, 00:09:26.669 "method": "bdev_nvme_attach_controller" 00:09:26.669 }, 00:09:26.669 { 00:09:26.669 "method": "bdev_wait_for_examine" 00:09:26.669 } 00:09:26.669 ] 00:09:26.669 } 00:09:26.669 ] 00:09:26.669 } 00:09:26.669 [2024-11-17 08:09:31.542892] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:26.669 [2024-11-17 08:09:31.543016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63371 ] 00:09:26.929 [2024-11-17 08:09:31.724495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.929 [2024-11-17 08:09:31.856196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.188 [2024-11-17 08:09:32.090664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.568  [2024-11-17T08:09:34.518Z] Copying: 64/64 [MB] (average 79 MBps) 00:09:29.506 00:09:29.506 00:09:29.506 real 0m2.950s 00:09:29.506 user 0m2.537s 00:09:29.506 sys 0m2.115s 00:09:29.506 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.507 ************************************ 00:09:29.507 END TEST dd_copy_to_out_bdev 00:09:29.507 ************************************ 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:29.507 ************************************ 00:09:29.507 START TEST dd_offset_magic 00:09:29.507 ************************************ 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:29.507 08:09:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:29.766 { 00:09:29.766 "subsystems": [ 00:09:29.766 { 00:09:29.766 "subsystem": "bdev", 00:09:29.766 "config": [ 00:09:29.766 { 00:09:29.766 "params": { 00:09:29.766 "trtype": "pcie", 00:09:29.766 "traddr": "0000:00:10.0", 00:09:29.766 "name": "Nvme0" 00:09:29.766 }, 00:09:29.766 "method": "bdev_nvme_attach_controller" 00:09:29.766 }, 00:09:29.766 { 00:09:29.766 "params": { 00:09:29.766 "trtype": "pcie", 00:09:29.766 "traddr": "0000:00:11.0", 00:09:29.766 "name": "Nvme1" 00:09:29.766 }, 00:09:29.766 "method": 
"bdev_nvme_attach_controller" 00:09:29.766 }, 00:09:29.766 { 00:09:29.766 "method": "bdev_wait_for_examine" 00:09:29.766 } 00:09:29.766 ] 00:09:29.766 } 00:09:29.766 ] 00:09:29.766 } 00:09:29.766 [2024-11-17 08:09:34.577869] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:29.766 [2024-11-17 08:09:34.577983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:09:29.766 [2024-11-17 08:09:34.756345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.025 [2024-11-17 08:09:34.862650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.284 [2024-11-17 08:09:35.062785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.544  [2024-11-17T08:09:36.494Z] Copying: 65/65 [MB] (average 722 MBps) 00:09:31.482 00:09:31.742 08:09:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:31.742 08:09:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:31.742 08:09:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:31.742 08:09:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:31.742 { 00:09:31.742 "subsystems": [ 00:09:31.742 { 00:09:31.742 "subsystem": "bdev", 00:09:31.742 "config": [ 00:09:31.742 { 00:09:31.742 "params": { 00:09:31.742 "trtype": "pcie", 00:09:31.742 "traddr": "0000:00:10.0", 00:09:31.742 "name": "Nvme0" 00:09:31.742 }, 00:09:31.742 "method": "bdev_nvme_attach_controller" 00:09:31.742 }, 00:09:31.742 { 00:09:31.742 "params": { 00:09:31.742 "trtype": "pcie", 00:09:31.742 "traddr": "0000:00:11.0", 00:09:31.742 "name": "Nvme1" 00:09:31.742 }, 00:09:31.742 "method": "bdev_nvme_attach_controller" 00:09:31.742 }, 00:09:31.742 { 00:09:31.742 "method": "bdev_wait_for_examine" 00:09:31.742 } 00:09:31.742 ] 00:09:31.742 } 00:09:31.742 ] 00:09:31.742 } 00:09:31.742 [2024-11-17 08:09:36.615790] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:31.742 [2024-11-17 08:09:36.615907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63457 ] 00:09:32.002 [2024-11-17 08:09:36.797849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.002 [2024-11-17 08:09:36.943385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.262 [2024-11-17 08:09:37.170875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.521  [2024-11-17T08:09:38.913Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:33.901 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:33.901 08:09:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:33.901 { 00:09:33.901 "subsystems": [ 00:09:33.901 { 00:09:33.901 "subsystem": "bdev", 00:09:33.901 "config": [ 00:09:33.901 { 00:09:33.901 "params": { 00:09:33.901 "trtype": "pcie", 00:09:33.901 "traddr": "0000:00:10.0", 00:09:33.901 "name": "Nvme0" 00:09:33.901 }, 00:09:33.901 "method": "bdev_nvme_attach_controller" 00:09:33.901 }, 00:09:33.901 { 00:09:33.901 "params": { 00:09:33.901 "trtype": "pcie", 00:09:33.901 "traddr": "0000:00:11.0", 00:09:33.901 "name": "Nvme1" 00:09:33.901 }, 00:09:33.901 "method": "bdev_nvme_attach_controller" 00:09:33.901 }, 00:09:33.901 { 00:09:33.901 "method": "bdev_wait_for_examine" 00:09:33.901 } 00:09:33.901 ] 00:09:33.901 } 00:09:33.901 ] 00:09:33.901 } 00:09:33.901 [2024-11-17 08:09:38.733695] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:33.901 [2024-11-17 08:09:38.733817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63491 ] 00:09:34.161 [2024-11-17 08:09:38.915956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.161 [2024-11-17 08:09:39.054019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.419 [2024-11-17 08:09:39.269258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.678  [2024-11-17T08:09:40.629Z] Copying: 65/65 [MB] (average 822 MBps) 00:09:35.617 00:09:35.876 08:09:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:35.876 08:09:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:35.876 08:09:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:35.876 08:09:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:35.876 { 00:09:35.876 "subsystems": [ 00:09:35.876 { 00:09:35.876 "subsystem": "bdev", 00:09:35.876 "config": [ 00:09:35.876 { 00:09:35.876 "params": { 00:09:35.876 "trtype": "pcie", 00:09:35.876 "traddr": "0000:00:10.0", 00:09:35.876 "name": "Nvme0" 00:09:35.876 }, 00:09:35.876 "method": "bdev_nvme_attach_controller" 00:09:35.876 }, 00:09:35.876 { 00:09:35.876 "params": { 00:09:35.876 "trtype": "pcie", 00:09:35.876 "traddr": "0000:00:11.0", 00:09:35.876 "name": "Nvme1" 00:09:35.876 }, 00:09:35.876 "method": "bdev_nvme_attach_controller" 00:09:35.876 }, 00:09:35.876 { 00:09:35.876 "method": "bdev_wait_for_examine" 00:09:35.876 } 00:09:35.876 ] 00:09:35.876 } 00:09:35.876 ] 00:09:35.876 } 00:09:35.876 [2024-11-17 08:09:40.723165] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:35.876 [2024-11-17 08:09:40.723290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63523 ] 00:09:36.136 [2024-11-17 08:09:40.902779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.136 [2024-11-17 08:09:41.037186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.395 [2024-11-17 08:09:41.269165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.655  [2024-11-17T08:09:43.047Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:38.035 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:38.035 00:09:38.035 real 0m8.258s 00:09:38.035 user 0m6.763s 00:09:38.035 sys 0m3.128s 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.035 ************************************ 00:09:38.035 END TEST dd_offset_magic 00:09:38.035 ************************************ 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:38.035 08:09:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:38.035 { 00:09:38.035 "subsystems": [ 00:09:38.035 { 00:09:38.035 "subsystem": "bdev", 00:09:38.035 "config": [ 00:09:38.035 { 00:09:38.035 "params": { 00:09:38.035 "trtype": "pcie", 00:09:38.035 "traddr": "0000:00:10.0", 00:09:38.035 "name": "Nvme0" 00:09:38.035 }, 00:09:38.035 "method": "bdev_nvme_attach_controller" 00:09:38.035 }, 00:09:38.035 { 00:09:38.035 "params": { 00:09:38.035 "trtype": "pcie", 00:09:38.035 "traddr": "0000:00:11.0", 00:09:38.035 "name": "Nvme1" 00:09:38.035 }, 00:09:38.035 "method": "bdev_nvme_attach_controller" 00:09:38.035 }, 00:09:38.035 { 00:09:38.035 "method": "bdev_wait_for_examine" 00:09:38.035 } 00:09:38.035 ] 00:09:38.035 } 00:09:38.036 ] 00:09:38.036 } 00:09:38.036 [2024-11-17 08:09:42.904534] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:38.036 [2024-11-17 08:09:42.904821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63572 ] 00:09:38.295 [2024-11-17 08:09:43.084380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.295 [2024-11-17 08:09:43.213499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.555 [2024-11-17 08:09:43.441382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.813  [2024-11-17T08:09:44.763Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:39.751 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:39.751 08:09:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:39.751 { 00:09:39.751 "subsystems": [ 00:09:39.752 { 00:09:39.752 "subsystem": "bdev", 00:09:39.752 "config": [ 00:09:39.752 { 00:09:39.752 "params": { 00:09:39.752 "trtype": "pcie", 00:09:39.752 "traddr": "0000:00:10.0", 00:09:39.752 "name": "Nvme0" 00:09:39.752 }, 00:09:39.752 "method": "bdev_nvme_attach_controller" 00:09:39.752 }, 00:09:39.752 { 00:09:39.752 "params": { 00:09:39.752 "trtype": "pcie", 00:09:39.752 "traddr": "0000:00:11.0", 00:09:39.752 "name": "Nvme1" 00:09:39.752 }, 00:09:39.752 "method": "bdev_nvme_attach_controller" 00:09:39.752 }, 00:09:39.752 { 00:09:39.752 "method": "bdev_wait_for_examine" 00:09:39.752 } 00:09:39.752 ] 00:09:39.752 } 00:09:39.752 ] 00:09:39.752 } 00:09:40.011 [2024-11-17 08:09:44.780371] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:40.011 [2024-11-17 08:09:44.780531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63605 ] 00:09:40.011 [2024-11-17 08:09:44.964290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.272 [2024-11-17 08:09:45.092020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.532 [2024-11-17 08:09:45.313205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.791  [2024-11-17T08:09:47.182Z] Copying: 5120/5120 [kB] (average 625 MBps) 00:09:42.170 00:09:42.171 08:09:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:42.171 00:09:42.171 real 0m17.735s 00:09:42.171 user 0m14.289s 00:09:42.171 sys 0m9.108s 00:09:42.171 ************************************ 00:09:42.171 END TEST spdk_dd_bdev_to_bdev 00:09:42.171 ************************************ 00:09:42.171 08:09:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.171 08:09:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:42.171 08:09:46 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:42.171 08:09:46 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:42.171 08:09:46 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.171 08:09:46 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.171 08:09:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:42.171 ************************************ 00:09:42.171 START TEST spdk_dd_uring 00:09:42.171 ************************************ 00:09:42.171 08:09:46 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:42.171 * Looking for test storage... 
00:09:42.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.171 --rc genhtml_branch_coverage=1 00:09:42.171 --rc genhtml_function_coverage=1 00:09:42.171 --rc genhtml_legend=1 00:09:42.171 --rc geninfo_all_blocks=1 00:09:42.171 --rc geninfo_unexecuted_blocks=1 00:09:42.171 00:09:42.171 ' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.171 --rc genhtml_branch_coverage=1 00:09:42.171 --rc genhtml_function_coverage=1 00:09:42.171 --rc genhtml_legend=1 00:09:42.171 --rc geninfo_all_blocks=1 00:09:42.171 --rc geninfo_unexecuted_blocks=1 00:09:42.171 00:09:42.171 ' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.171 --rc genhtml_branch_coverage=1 00:09:42.171 --rc genhtml_function_coverage=1 00:09:42.171 --rc genhtml_legend=1 00:09:42.171 --rc geninfo_all_blocks=1 00:09:42.171 --rc geninfo_unexecuted_blocks=1 00:09:42.171 00:09:42.171 ' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.171 --rc genhtml_branch_coverage=1 00:09:42.171 --rc genhtml_function_coverage=1 00:09:42.171 --rc genhtml_legend=1 00:09:42.171 --rc geninfo_all_blocks=1 00:09:42.171 --rc geninfo_unexecuted_blocks=1 00:09:42.171 00:09:42.171 ' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:42.171 ************************************ 00:09:42.171 START TEST dd_uring_copy 00:09:42.171 ************************************ 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:42.171 
08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:42.171 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:42.172 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:42.172 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:42.172 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:42.172 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:42.172 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:42.431 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=dy27k8mi0eqjca2utz3y3hkougxbtytv4wd9pe5gmds6vluf3wo3upfzrf91qq33m83i2n7rbujgfhvd5w0z4fvag5t76diychqjhk2q3o1v9kj935z40wl2l9f40adjm2m5nv5dv8wnoff8epczpr17ld9e9a6085l8gs8036cfgialbexrwa1niyaon8h8n71scdxmay5fifmi4llpnv7fb2bmhjgbkqgwm2uzh6szh0p8jh1j5ko6z650fe52xdf21fpyihz9mula43w53atz6zrty51sgjyn34yyl91ivkjinubo9hrv6l8eowrdybxxvsommsqzdf7cjdjw04fvdfqgfxe50kshkp45f697kales7hs51uvspl7i6grc3k6si015svrbd4jb3gk00hg4ijrz8xupqqs839z5wu4it0vivgznk7s79anx1p6rvbum7xar7a1b2o8w55q417mpvsbdz4b8vzlvd3wlumlr1e4ffvxe0icapgq0ux4d6bfk70o5ix7lmb1c8a8bd9bi7ro3xbrbkaav8o1b3prt4edwggbrb7kjfsrl8c62mg7hh3iqmacettykbo52h8byzavu6f78ln05prr6v9as16b9jja8eivzz1xmi01kniqrlud2mtjbpatyrttd93d1zg4k0klwu8z5mfxsjbkcsoslxife09t0ev19eambwlwkm9uo6ium6ac3f7jey4zak6kdcobrd18ul002mnbfs0k2h6dgy5uamig0jgbeocukabn7l792d6273xyxfqhfkj03e1gyg1d51xcqvrvvuv2nst34on61zevbv803nlungegw6g8hftdf7zez4o4r2qmarvh6e6hwbyxb4r8w6wx6qttozb9f68wlveonevlznzf0k0mzakr6hfbl5q5xtr9gvbuicwn1rtwyeiv1xxs3r9ggycp7bz6k95m1adv4aioe6yul8dpd4m0hojj7pdfn1qbzl05wnx0t65ddnw43gzqv5xslez8envb 00:09:42.431 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
dy27k8mi0eqjca2utz3y3hkougxbtytv4wd9pe5gmds6vluf3wo3upfzrf91qq33m83i2n7rbujgfhvd5w0z4fvag5t76diychqjhk2q3o1v9kj935z40wl2l9f40adjm2m5nv5dv8wnoff8epczpr17ld9e9a6085l8gs8036cfgialbexrwa1niyaon8h8n71scdxmay5fifmi4llpnv7fb2bmhjgbkqgwm2uzh6szh0p8jh1j5ko6z650fe52xdf21fpyihz9mula43w53atz6zrty51sgjyn34yyl91ivkjinubo9hrv6l8eowrdybxxvsommsqzdf7cjdjw04fvdfqgfxe50kshkp45f697kales7hs51uvspl7i6grc3k6si015svrbd4jb3gk00hg4ijrz8xupqqs839z5wu4it0vivgznk7s79anx1p6rvbum7xar7a1b2o8w55q417mpvsbdz4b8vzlvd3wlumlr1e4ffvxe0icapgq0ux4d6bfk70o5ix7lmb1c8a8bd9bi7ro3xbrbkaav8o1b3prt4edwggbrb7kjfsrl8c62mg7hh3iqmacettykbo52h8byzavu6f78ln05prr6v9as16b9jja8eivzz1xmi01kniqrlud2mtjbpatyrttd93d1zg4k0klwu8z5mfxsjbkcsoslxife09t0ev19eambwlwkm9uo6ium6ac3f7jey4zak6kdcobrd18ul002mnbfs0k2h6dgy5uamig0jgbeocukabn7l792d6273xyxfqhfkj03e1gyg1d51xcqvrvvuv2nst34on61zevbv803nlungegw6g8hftdf7zez4o4r2qmarvh6e6hwbyxb4r8w6wx6qttozb9f68wlveonevlznzf0k0mzakr6hfbl5q5xtr9gvbuicwn1rtwyeiv1xxs3r9ggycp7bz6k95m1adv4aioe6yul8dpd4m0hojj7pdfn1qbzl05wnx0t65ddnw43gzqv5xslez8envb 00:09:42.431 08:09:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:42.431 [2024-11-17 08:09:47.292105] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:42.431 [2024-11-17 08:09:47.292375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63701 ] 00:09:42.691 [2024-11-17 08:09:47.471080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.691 [2024-11-17 08:09:47.602653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.951 [2024-11-17 08:09:47.837346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.404  [2024-11-17T08:09:51.953Z] Copying: 511/511 [MB] (average 1147 MBps) 00:09:46.941 00:09:46.941 08:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:46.941 08:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:46.941 08:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:46.941 08:09:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:46.941 { 00:09:46.941 "subsystems": [ 00:09:46.941 { 00:09:46.941 "subsystem": "bdev", 00:09:46.941 "config": [ 00:09:46.941 { 00:09:46.941 "params": { 00:09:46.941 "block_size": 512, 00:09:46.941 "num_blocks": 1048576, 00:09:46.941 "name": "malloc0" 00:09:46.941 }, 00:09:46.941 "method": "bdev_malloc_create" 00:09:46.941 }, 00:09:46.941 { 00:09:46.941 "params": { 00:09:46.941 "filename": "/dev/zram1", 00:09:46.941 "name": "uring0" 00:09:46.941 }, 00:09:46.941 "method": "bdev_uring_create" 00:09:46.941 }, 00:09:46.941 { 00:09:46.941 "method": "bdev_wait_for_examine" 00:09:46.941 } 00:09:46.941 ] 00:09:46.941 } 00:09:46.941 ] 00:09:46.941 } 00:09:46.941 [2024-11-17 08:09:51.849693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:46.941 [2024-11-17 08:09:51.849809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63751 ] 00:09:47.200 [2024-11-17 08:09:52.030530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.200 [2024-11-17 08:09:52.166760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.460 [2024-11-17 08:09:52.408567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.367  [2024-11-17T08:09:55.317Z] Copying: 240/512 [MB] (240 MBps) [2024-11-17T08:09:55.317Z] Copying: 482/512 [MB] (241 MBps) [2024-11-17T08:09:58.609Z] Copying: 512/512 [MB] (average 241 MBps) 00:09:53.597 00:09:53.597 08:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:53.597 08:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:53.597 08:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:53.597 08:09:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:53.597 { 00:09:53.597 "subsystems": [ 00:09:53.597 { 00:09:53.597 "subsystem": "bdev", 00:09:53.597 "config": [ 00:09:53.597 { 00:09:53.597 "params": { 00:09:53.597 "block_size": 512, 00:09:53.597 "num_blocks": 1048576, 00:09:53.597 "name": "malloc0" 00:09:53.597 }, 00:09:53.597 "method": "bdev_malloc_create" 00:09:53.597 }, 00:09:53.597 { 00:09:53.597 "params": { 00:09:53.597 "filename": "/dev/zram1", 00:09:53.597 "name": "uring0" 00:09:53.597 }, 00:09:53.597 "method": "bdev_uring_create" 00:09:53.597 }, 00:09:53.597 { 00:09:53.597 "method": "bdev_wait_for_examine" 00:09:53.597 } 00:09:53.597 ] 00:09:53.597 } 00:09:53.597 ] 00:09:53.597 } 00:09:53.597 [2024-11-17 08:09:58.063042] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:53.597 [2024-11-17 08:09:58.063164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63829 ] 00:09:53.597 [2024-11-17 08:09:58.245248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.597 [2024-11-17 08:09:58.375500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.857 [2024-11-17 08:09:58.609057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.762  [2024-11-17T08:10:01.712Z] Copying: 170/512 [MB] (170 MBps) [2024-11-17T08:10:02.649Z] Copying: 327/512 [MB] (157 MBps) [2024-11-17T08:10:02.649Z] Copying: 476/512 [MB] (149 MBps) [2024-11-17T08:10:05.940Z] Copying: 512/512 [MB] (average 160 MBps) 00:10:00.928 00:10:00.928 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ dy27k8mi0eqjca2utz3y3hkougxbtytv4wd9pe5gmds6vluf3wo3upfzrf91qq33m83i2n7rbujgfhvd5w0z4fvag5t76diychqjhk2q3o1v9kj935z40wl2l9f40adjm2m5nv5dv8wnoff8epczpr17ld9e9a6085l8gs8036cfgialbexrwa1niyaon8h8n71scdxmay5fifmi4llpnv7fb2bmhjgbkqgwm2uzh6szh0p8jh1j5ko6z650fe52xdf21fpyihz9mula43w53atz6zrty51sgjyn34yyl91ivkjinubo9hrv6l8eowrdybxxvsommsqzdf7cjdjw04fvdfqgfxe50kshkp45f697kales7hs51uvspl7i6grc3k6si015svrbd4jb3gk00hg4ijrz8xupqqs839z5wu4it0vivgznk7s79anx1p6rvbum7xar7a1b2o8w55q417mpvsbdz4b8vzlvd3wlumlr1e4ffvxe0icapgq0ux4d6bfk70o5ix7lmb1c8a8bd9bi7ro3xbrbkaav8o1b3prt4edwggbrb7kjfsrl8c62mg7hh3iqmacettykbo52h8byzavu6f78ln05prr6v9as16b9jja8eivzz1xmi01kniqrlud2mtjbpatyrttd93d1zg4k0klwu8z5mfxsjbkcsoslxife09t0ev19eambwlwkm9uo6ium6ac3f7jey4zak6kdcobrd18ul002mnbfs0k2h6dgy5uamig0jgbeocukabn7l792d6273xyxfqhfkj03e1gyg1d51xcqvrvvuv2nst34on61zevbv803nlungegw6g8hftdf7zez4o4r2qmarvh6e6hwbyxb4r8w6wx6qttozb9f68wlveonevlznzf0k0mzakr6hfbl5q5xtr9gvbuicwn1rtwyeiv1xxs3r9ggycp7bz6k95m1adv4aioe6yul8dpd4m0hojj7pdfn1qbzl05wnx0t65ddnw43gzqv5xslez8envb == 
\d\y\2\7\k\8\m\i\0\e\q\j\c\a\2\u\t\z\3\y\3\h\k\o\u\g\x\b\t\y\t\v\4\w\d\9\p\e\5\g\m\d\s\6\v\l\u\f\3\w\o\3\u\p\f\z\r\f\9\1\q\q\3\3\m\8\3\i\2\n\7\r\b\u\j\g\f\h\v\d\5\w\0\z\4\f\v\a\g\5\t\7\6\d\i\y\c\h\q\j\h\k\2\q\3\o\1\v\9\k\j\9\3\5\z\4\0\w\l\2\l\9\f\4\0\a\d\j\m\2\m\5\n\v\5\d\v\8\w\n\o\f\f\8\e\p\c\z\p\r\1\7\l\d\9\e\9\a\6\0\8\5\l\8\g\s\8\0\3\6\c\f\g\i\a\l\b\e\x\r\w\a\1\n\i\y\a\o\n\8\h\8\n\7\1\s\c\d\x\m\a\y\5\f\i\f\m\i\4\l\l\p\n\v\7\f\b\2\b\m\h\j\g\b\k\q\g\w\m\2\u\z\h\6\s\z\h\0\p\8\j\h\1\j\5\k\o\6\z\6\5\0\f\e\5\2\x\d\f\2\1\f\p\y\i\h\z\9\m\u\l\a\4\3\w\5\3\a\t\z\6\z\r\t\y\5\1\s\g\j\y\n\3\4\y\y\l\9\1\i\v\k\j\i\n\u\b\o\9\h\r\v\6\l\8\e\o\w\r\d\y\b\x\x\v\s\o\m\m\s\q\z\d\f\7\c\j\d\j\w\0\4\f\v\d\f\q\g\f\x\e\5\0\k\s\h\k\p\4\5\f\6\9\7\k\a\l\e\s\7\h\s\5\1\u\v\s\p\l\7\i\6\g\r\c\3\k\6\s\i\0\1\5\s\v\r\b\d\4\j\b\3\g\k\0\0\h\g\4\i\j\r\z\8\x\u\p\q\q\s\8\3\9\z\5\w\u\4\i\t\0\v\i\v\g\z\n\k\7\s\7\9\a\n\x\1\p\6\r\v\b\u\m\7\x\a\r\7\a\1\b\2\o\8\w\5\5\q\4\1\7\m\p\v\s\b\d\z\4\b\8\v\z\l\v\d\3\w\l\u\m\l\r\1\e\4\f\f\v\x\e\0\i\c\a\p\g\q\0\u\x\4\d\6\b\f\k\7\0\o\5\i\x\7\l\m\b\1\c\8\a\8\b\d\9\b\i\7\r\o\3\x\b\r\b\k\a\a\v\8\o\1\b\3\p\r\t\4\e\d\w\g\g\b\r\b\7\k\j\f\s\r\l\8\c\6\2\m\g\7\h\h\3\i\q\m\a\c\e\t\t\y\k\b\o\5\2\h\8\b\y\z\a\v\u\6\f\7\8\l\n\0\5\p\r\r\6\v\9\a\s\1\6\b\9\j\j\a\8\e\i\v\z\z\1\x\m\i\0\1\k\n\i\q\r\l\u\d\2\m\t\j\b\p\a\t\y\r\t\t\d\9\3\d\1\z\g\4\k\0\k\l\w\u\8\z\5\m\f\x\s\j\b\k\c\s\o\s\l\x\i\f\e\0\9\t\0\e\v\1\9\e\a\m\b\w\l\w\k\m\9\u\o\6\i\u\m\6\a\c\3\f\7\j\e\y\4\z\a\k\6\k\d\c\o\b\r\d\1\8\u\l\0\0\2\m\n\b\f\s\0\k\2\h\6\d\g\y\5\u\a\m\i\g\0\j\g\b\e\o\c\u\k\a\b\n\7\l\7\9\2\d\6\2\7\3\x\y\x\f\q\h\f\k\j\0\3\e\1\g\y\g\1\d\5\1\x\c\q\v\r\v\v\u\v\2\n\s\t\3\4\o\n\6\1\z\e\v\b\v\8\0\3\n\l\u\n\g\e\g\w\6\g\8\h\f\t\d\f\7\z\e\z\4\o\4\r\2\q\m\a\r\v\h\6\e\6\h\w\b\y\x\b\4\r\8\w\6\w\x\6\q\t\t\o\z\b\9\f\6\8\w\l\v\e\o\n\e\v\l\z\n\z\f\0\k\0\m\z\a\k\r\6\h\f\b\l\5\q\5\x\t\r\9\g\v\b\u\i\c\w\n\1\r\t\w\y\e\i\v\1\x\x\s\3\r\9\g\g\y\c\p\7\b\z\6\k\9\5\m\1\a\d\v\4\a\i\o\e\6\y\u\l\8\d\p\d\4\m\0\h\o\j\j\7\p\d\f\n\1\q\b\z\l\0\5\w\n\x\0\t\6\5\d\d\n\w\4\3\g\z\q\v\5\x\s\l\e\z\8\e\n\v\b ]] 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ dy27k8mi0eqjca2utz3y3hkougxbtytv4wd9pe5gmds6vluf3wo3upfzrf91qq33m83i2n7rbujgfhvd5w0z4fvag5t76diychqjhk2q3o1v9kj935z40wl2l9f40adjm2m5nv5dv8wnoff8epczpr17ld9e9a6085l8gs8036cfgialbexrwa1niyaon8h8n71scdxmay5fifmi4llpnv7fb2bmhjgbkqgwm2uzh6szh0p8jh1j5ko6z650fe52xdf21fpyihz9mula43w53atz6zrty51sgjyn34yyl91ivkjinubo9hrv6l8eowrdybxxvsommsqzdf7cjdjw04fvdfqgfxe50kshkp45f697kales7hs51uvspl7i6grc3k6si015svrbd4jb3gk00hg4ijrz8xupqqs839z5wu4it0vivgznk7s79anx1p6rvbum7xar7a1b2o8w55q417mpvsbdz4b8vzlvd3wlumlr1e4ffvxe0icapgq0ux4d6bfk70o5ix7lmb1c8a8bd9bi7ro3xbrbkaav8o1b3prt4edwggbrb7kjfsrl8c62mg7hh3iqmacettykbo52h8byzavu6f78ln05prr6v9as16b9jja8eivzz1xmi01kniqrlud2mtjbpatyrttd93d1zg4k0klwu8z5mfxsjbkcsoslxife09t0ev19eambwlwkm9uo6ium6ac3f7jey4zak6kdcobrd18ul002mnbfs0k2h6dgy5uamig0jgbeocukabn7l792d6273xyxfqhfkj03e1gyg1d51xcqvrvvuv2nst34on61zevbv803nlungegw6g8hftdf7zez4o4r2qmarvh6e6hwbyxb4r8w6wx6qttozb9f68wlveonevlznzf0k0mzakr6hfbl5q5xtr9gvbuicwn1rtwyeiv1xxs3r9ggycp7bz6k95m1adv4aioe6yul8dpd4m0hojj7pdfn1qbzl05wnx0t65ddnw43gzqv5xslez8envb == 
\d\y\2\7\k\8\m\i\0\e\q\j\c\a\2\u\t\z\3\y\3\h\k\o\u\g\x\b\t\y\t\v\4\w\d\9\p\e\5\g\m\d\s\6\v\l\u\f\3\w\o\3\u\p\f\z\r\f\9\1\q\q\3\3\m\8\3\i\2\n\7\r\b\u\j\g\f\h\v\d\5\w\0\z\4\f\v\a\g\5\t\7\6\d\i\y\c\h\q\j\h\k\2\q\3\o\1\v\9\k\j\9\3\5\z\4\0\w\l\2\l\9\f\4\0\a\d\j\m\2\m\5\n\v\5\d\v\8\w\n\o\f\f\8\e\p\c\z\p\r\1\7\l\d\9\e\9\a\6\0\8\5\l\8\g\s\8\0\3\6\c\f\g\i\a\l\b\e\x\r\w\a\1\n\i\y\a\o\n\8\h\8\n\7\1\s\c\d\x\m\a\y\5\f\i\f\m\i\4\l\l\p\n\v\7\f\b\2\b\m\h\j\g\b\k\q\g\w\m\2\u\z\h\6\s\z\h\0\p\8\j\h\1\j\5\k\o\6\z\6\5\0\f\e\5\2\x\d\f\2\1\f\p\y\i\h\z\9\m\u\l\a\4\3\w\5\3\a\t\z\6\z\r\t\y\5\1\s\g\j\y\n\3\4\y\y\l\9\1\i\v\k\j\i\n\u\b\o\9\h\r\v\6\l\8\e\o\w\r\d\y\b\x\x\v\s\o\m\m\s\q\z\d\f\7\c\j\d\j\w\0\4\f\v\d\f\q\g\f\x\e\5\0\k\s\h\k\p\4\5\f\6\9\7\k\a\l\e\s\7\h\s\5\1\u\v\s\p\l\7\i\6\g\r\c\3\k\6\s\i\0\1\5\s\v\r\b\d\4\j\b\3\g\k\0\0\h\g\4\i\j\r\z\8\x\u\p\q\q\s\8\3\9\z\5\w\u\4\i\t\0\v\i\v\g\z\n\k\7\s\7\9\a\n\x\1\p\6\r\v\b\u\m\7\x\a\r\7\a\1\b\2\o\8\w\5\5\q\4\1\7\m\p\v\s\b\d\z\4\b\8\v\z\l\v\d\3\w\l\u\m\l\r\1\e\4\f\f\v\x\e\0\i\c\a\p\g\q\0\u\x\4\d\6\b\f\k\7\0\o\5\i\x\7\l\m\b\1\c\8\a\8\b\d\9\b\i\7\r\o\3\x\b\r\b\k\a\a\v\8\o\1\b\3\p\r\t\4\e\d\w\g\g\b\r\b\7\k\j\f\s\r\l\8\c\6\2\m\g\7\h\h\3\i\q\m\a\c\e\t\t\y\k\b\o\5\2\h\8\b\y\z\a\v\u\6\f\7\8\l\n\0\5\p\r\r\6\v\9\a\s\1\6\b\9\j\j\a\8\e\i\v\z\z\1\x\m\i\0\1\k\n\i\q\r\l\u\d\2\m\t\j\b\p\a\t\y\r\t\t\d\9\3\d\1\z\g\4\k\0\k\l\w\u\8\z\5\m\f\x\s\j\b\k\c\s\o\s\l\x\i\f\e\0\9\t\0\e\v\1\9\e\a\m\b\w\l\w\k\m\9\u\o\6\i\u\m\6\a\c\3\f\7\j\e\y\4\z\a\k\6\k\d\c\o\b\r\d\1\8\u\l\0\0\2\m\n\b\f\s\0\k\2\h\6\d\g\y\5\u\a\m\i\g\0\j\g\b\e\o\c\u\k\a\b\n\7\l\7\9\2\d\6\2\7\3\x\y\x\f\q\h\f\k\j\0\3\e\1\g\y\g\1\d\5\1\x\c\q\v\r\v\v\u\v\2\n\s\t\3\4\o\n\6\1\z\e\v\b\v\8\0\3\n\l\u\n\g\e\g\w\6\g\8\h\f\t\d\f\7\z\e\z\4\o\4\r\2\q\m\a\r\v\h\6\e\6\h\w\b\y\x\b\4\r\8\w\6\w\x\6\q\t\t\o\z\b\9\f\6\8\w\l\v\e\o\n\e\v\l\z\n\z\f\0\k\0\m\z\a\k\r\6\h\f\b\l\5\q\5\x\t\r\9\g\v\b\u\i\c\w\n\1\r\t\w\y\e\i\v\1\x\x\s\3\r\9\g\g\y\c\p\7\b\z\6\k\9\5\m\1\a\d\v\4\a\i\o\e\6\y\u\l\8\d\p\d\4\m\0\h\o\j\j\7\p\d\f\n\1\q\b\z\l\0\5\w\n\x\0\t\6\5\d\d\n\w\4\3\g\z\q\v\5\x\s\l\e\z\8\e\n\v\b ]] 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:00.929 08:10:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:00.929 { 00:10:00.929 "subsystems": [ 00:10:00.929 { 00:10:00.929 "subsystem": "bdev", 00:10:00.929 "config": [ 00:10:00.929 { 00:10:00.929 "params": { 00:10:00.929 "block_size": 512, 00:10:00.929 "num_blocks": 1048576, 00:10:00.929 "name": "malloc0" 00:10:00.929 }, 00:10:00.929 "method": "bdev_malloc_create" 00:10:00.929 }, 00:10:00.929 { 00:10:00.929 "params": { 00:10:00.929 "filename": "/dev/zram1", 00:10:00.929 "name": "uring0" 00:10:00.929 }, 00:10:00.929 "method": "bdev_uring_create" 00:10:00.929 }, 00:10:00.929 { 00:10:00.929 "method": "bdev_wait_for_examine" 00:10:00.929 } 00:10:00.929 ] 00:10:00.929 } 00:10:00.929 ] 00:10:00.929 } 00:10:00.929 [2024-11-17 08:10:05.805859] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:00.929 [2024-11-17 08:10:05.806135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63962 ] 00:10:01.189 [2024-11-17 08:10:05.984699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.189 [2024-11-17 08:10:06.118921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.448 [2024-11-17 08:10:06.359423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:03.355  [2024-11-17T08:10:09.306Z] Copying: 184/512 [MB] (184 MBps) [2024-11-17T08:10:10.243Z] Copying: 368/512 [MB] (183 MBps) [2024-11-17T08:10:12.781Z] Copying: 512/512 [MB] (average 184 MBps) 00:10:07.769 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:07.769 08:10:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:07.769 { 00:10:07.769 "subsystems": [ 00:10:07.769 { 00:10:07.769 "subsystem": "bdev", 00:10:07.769 "config": [ 00:10:07.769 { 00:10:07.769 "params": { 00:10:07.769 "block_size": 512, 00:10:07.769 "num_blocks": 1048576, 00:10:07.769 "name": "malloc0" 00:10:07.769 }, 00:10:07.769 "method": "bdev_malloc_create" 00:10:07.769 }, 00:10:07.769 { 00:10:07.769 "params": { 00:10:07.769 "filename": "/dev/zram1", 00:10:07.769 "name": "uring0" 00:10:07.769 }, 00:10:07.769 "method": "bdev_uring_create" 00:10:07.769 }, 00:10:07.769 { 00:10:07.769 "params": { 00:10:07.769 "name": "uring0" 00:10:07.769 }, 00:10:07.769 "method": "bdev_uring_delete" 00:10:07.769 }, 00:10:07.769 { 00:10:07.769 "method": "bdev_wait_for_examine" 00:10:07.769 } 00:10:07.769 ] 00:10:07.769 } 00:10:07.769 ] 00:10:07.769 } 00:10:07.769 [2024-11-17 08:10:12.703996] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:07.769 [2024-11-17 08:10:12.704260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64047 ] 00:10:08.028 [2024-11-17 08:10:12.888272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.028 [2024-11-17 08:10:13.026494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.287 [2024-11-17 08:10:13.267649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.227  [2024-11-17T08:10:16.776Z] Copying: 0/0 [B] (average 0 Bps) 00:10:11.764 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:11.764 08:10:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:11.764 { 00:10:11.764 "subsystems": [ 00:10:11.764 { 00:10:11.764 "subsystem": "bdev", 00:10:11.764 "config": [ 00:10:11.764 { 00:10:11.764 "params": { 00:10:11.764 "block_size": 512, 00:10:11.764 "num_blocks": 1048576, 00:10:11.764 "name": "malloc0" 00:10:11.764 }, 00:10:11.764 "method": "bdev_malloc_create" 00:10:11.764 }, 00:10:11.764 { 00:10:11.764 "params": { 00:10:11.764 "filename": "/dev/zram1", 00:10:11.764 "name": "uring0" 00:10:11.764 }, 00:10:11.764 "method": "bdev_uring_create" 00:10:11.764 }, 00:10:11.764 { 00:10:11.764 "params": { 00:10:11.764 "name": "uring0" 00:10:11.764 }, 00:10:11.764 "method": 
"bdev_uring_delete" 00:10:11.764 }, 00:10:11.764 { 00:10:11.764 "method": "bdev_wait_for_examine" 00:10:11.764 } 00:10:11.764 ] 00:10:11.764 } 00:10:11.764 ] 00:10:11.764 } 00:10:12.024 [2024-11-17 08:10:16.791179] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:12.025 [2024-11-17 08:10:16.791476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64110 ] 00:10:12.025 [2024-11-17 08:10:16.972232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.284 [2024-11-17 08:10:17.104011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.544 [2024-11-17 08:10:17.339125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.113 [2024-11-17 08:10:18.092491] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:10:13.113 [2024-11-17 08:10:18.092766] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:10:13.113 [2024-11-17 08:10:18.092798] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:10:13.113 [2024-11-17 08:10:18.092818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.650 [2024-11-17 08:10:20.524013] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:10:15.909 08:10:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:16.169 00:10:16.169 real 0m33.908s 00:10:16.169 user 0m27.414s 00:10:16.169 sys 0m16.655s 00:10:16.169 ************************************ 00:10:16.169 END TEST dd_uring_copy 00:10:16.169 ************************************ 00:10:16.169 08:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.169 08:10:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:16.169 ************************************ 00:10:16.169 END TEST spdk_dd_uring 00:10:16.169 ************************************ 00:10:16.169 00:10:16.169 real 0m34.241s 00:10:16.169 user 0m27.571s 00:10:16.169 sys 0m16.836s 00:10:16.169 08:10:21 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.169 08:10:21 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 08:10:21 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:16.429 08:10:21 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.429 08:10:21 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.429 08:10:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:16.429 ************************************ 00:10:16.429 START TEST spdk_dd_sparse 00:10:16.429 ************************************ 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:16.429 * Looking for test storage... 00:10:16.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.429 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.691 --rc genhtml_branch_coverage=1 00:10:16.691 --rc genhtml_function_coverage=1 00:10:16.691 --rc genhtml_legend=1 00:10:16.691 --rc geninfo_all_blocks=1 00:10:16.691 --rc geninfo_unexecuted_blocks=1 00:10:16.691 00:10:16.691 ' 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.691 --rc genhtml_branch_coverage=1 00:10:16.691 --rc genhtml_function_coverage=1 00:10:16.691 --rc genhtml_legend=1 00:10:16.691 --rc geninfo_all_blocks=1 00:10:16.691 --rc geninfo_unexecuted_blocks=1 00:10:16.691 00:10:16.691 ' 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.691 --rc genhtml_branch_coverage=1 00:10:16.691 --rc genhtml_function_coverage=1 00:10:16.691 --rc genhtml_legend=1 00:10:16.691 --rc geninfo_all_blocks=1 00:10:16.691 --rc geninfo_unexecuted_blocks=1 00:10:16.691 00:10:16.691 ' 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.691 --rc genhtml_branch_coverage=1 00:10:16.691 --rc genhtml_function_coverage=1 00:10:16.691 --rc genhtml_legend=1 00:10:16.691 --rc geninfo_all_blocks=1 00:10:16.691 --rc geninfo_unexecuted_blocks=1 00:10:16.691 00:10:16.691 ' 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.691 08:10:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.691 08:10:21 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:10:16.692 1+0 records in 00:10:16.692 1+0 records out 00:10:16.692 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0112541 s, 373 MB/s 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:10:16.692 1+0 records in 00:10:16.692 1+0 records out 00:10:16.692 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0141259 s, 297 MB/s 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:10:16.692 1+0 records in 00:10:16.692 1+0 records out 00:10:16.692 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0101631 s, 413 MB/s 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:16.692 ************************************ 00:10:16.692 START TEST dd_sparse_file_to_file 00:10:16.692 ************************************ 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:16.692 08:10:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:16.692 { 00:10:16.692 "subsystems": [ 00:10:16.692 { 00:10:16.692 "subsystem": "bdev", 00:10:16.692 "config": [ 00:10:16.692 { 00:10:16.692 "params": { 00:10:16.692 "block_size": 4096, 00:10:16.692 "filename": "dd_sparse_aio_disk", 00:10:16.692 "name": "dd_aio" 00:10:16.692 }, 00:10:16.692 "method": "bdev_aio_create" 00:10:16.692 }, 00:10:16.692 { 00:10:16.692 "params": { 00:10:16.692 "lvs_name": "dd_lvstore", 00:10:16.692 "bdev_name": "dd_aio" 00:10:16.692 }, 00:10:16.692 "method": "bdev_lvol_create_lvstore" 00:10:16.692 }, 00:10:16.692 { 00:10:16.692 "method": "bdev_wait_for_examine" 00:10:16.692 } 00:10:16.692 ] 00:10:16.692 } 00:10:16.692 ] 00:10:16.692 } 00:10:16.692 [2024-11-17 08:10:21.642488] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
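For context on the stat numbers checked a few lines below: the prepare step built file_zero1 with three 4 MiB writes at offsets 0, 16 MiB and 32 MiB, so its logical size is 36 MiB while only 12 MiB of blocks are allocated, and the file-to-file copy is expected to preserve both figures. Condensed from the trace above (sketch, not verbatim):

dd if=/dev/zero of=file_zero1 bs=4M count=1          # 4 MiB of data at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # 4 MiB at offset 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # 4 MiB at offset 32 MiB; logical size is now 36 MiB
stat --printf='%s\n' file_zero1   # 37748736 bytes of logical size
stat --printf='%b\n' file_zero1   # 24576 blocks of 512 B, i.e. only 12 MiB actually allocated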
00:10:16.692 [2024-11-17 08:10:21.642612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64245 ] 00:10:16.972 [2024-11-17 08:10:21.827150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.972 [2024-11-17 08:10:21.956572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.264 [2024-11-17 08:10:22.187582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.524  [2024-11-17T08:10:23.915Z] Copying: 12/36 [MB] (average 600 MBps) 00:10:18.903 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:18.903 00:10:18.903 real 0m2.212s 00:10:18.903 user 0m1.733s 00:10:18.903 sys 0m1.384s 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:18.903 ************************************ 00:10:18.903 END TEST dd_sparse_file_to_file 00:10:18.903 ************************************ 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:18.903 ************************************ 00:10:18.903 START TEST dd_sparse_file_to_bdev 00:10:18.903 ************************************ 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:18.903 08:10:23 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:18.903 { 00:10:18.903 "subsystems": [ 00:10:18.903 { 00:10:18.903 "subsystem": "bdev", 00:10:18.903 "config": [ 00:10:18.903 { 00:10:18.903 "params": { 00:10:18.903 "block_size": 4096, 00:10:18.903 "filename": "dd_sparse_aio_disk", 00:10:18.903 "name": "dd_aio" 00:10:18.903 }, 00:10:18.903 "method": "bdev_aio_create" 00:10:18.903 }, 00:10:18.903 { 00:10:18.903 "params": { 00:10:18.903 "lvs_name": "dd_lvstore", 00:10:18.903 "lvol_name": "dd_lvol", 00:10:18.903 "size_in_mib": 36, 00:10:18.903 "thin_provision": true 00:10:18.903 }, 00:10:18.903 "method": "bdev_lvol_create" 00:10:18.903 }, 00:10:18.903 { 00:10:18.903 "method": "bdev_wait_for_examine" 00:10:18.903 } 00:10:18.903 ] 00:10:18.903 } 00:10:18.903 ] 00:10:18.903 } 00:10:19.162 [2024-11-17 08:10:23.931627] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:19.162 [2024-11-17 08:10:23.931881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64309 ] 00:10:19.162 [2024-11-17 08:10:24.116010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.421 [2024-11-17 08:10:24.246751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.681 [2024-11-17 08:10:24.476234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:19.940  [2024-11-17T08:10:26.333Z] Copying: 12/36 [MB] (average 461 MBps) 00:10:21.321 00:10:21.321 00:10:21.321 real 0m2.152s 00:10:21.321 user 0m1.712s 00:10:21.321 sys 0m1.332s 00:10:21.322 08:10:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.322 08:10:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:21.322 ************************************ 00:10:21.322 END TEST dd_sparse_file_to_bdev 00:10:21.322 ************************************ 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:21.322 ************************************ 00:10:21.322 START TEST dd_sparse_bdev_to_file 00:10:21.322 ************************************ 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
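The copy that just finished went into a thin-provisioned logical volume, which is what lets a 36 MiB lvol absorb the 36 MiB sparse file while allocating space only for the 12 MiB that carry data; the lvstore itself was created on dd_aio by the preceding file-to-file case. Condensed from the config dump above, with a temp file standing in for the /dev/fd descriptor (sketch only):

cat > /tmp/file_to_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol", "size_in_mib": 36, "thin_provision": true },
          "method": "bdev_lvol_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Hole skipping (--sparse) keeps the unwritten 24 MiB from being copied into the thin lvol.
spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /tmp/file_to_bdev.json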
00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:21.322 08:10:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:21.322 { 00:10:21.322 "subsystems": [ 00:10:21.322 { 00:10:21.322 "subsystem": "bdev", 00:10:21.322 "config": [ 00:10:21.322 { 00:10:21.322 "params": { 00:10:21.322 "block_size": 4096, 00:10:21.322 "filename": "dd_sparse_aio_disk", 00:10:21.322 "name": "dd_aio" 00:10:21.322 }, 00:10:21.322 "method": "bdev_aio_create" 00:10:21.322 }, 00:10:21.322 { 00:10:21.322 "method": "bdev_wait_for_examine" 00:10:21.322 } 00:10:21.322 ] 00:10:21.322 } 00:10:21.322 ] 00:10:21.322 } 00:10:21.322 [2024-11-17 08:10:26.156899] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:21.322 [2024-11-17 08:10:26.157205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64359 ] 00:10:21.582 [2024-11-17 08:10:26.343090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.582 [2024-11-17 08:10:26.478491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.841 [2024-11-17 08:10:26.717326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.101  [2024-11-17T08:10:28.493Z] Copying: 12/36 [MB] (average 666 MBps) 00:10:23.481 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:23.481 00:10:23.481 real 0m2.219s 00:10:23.481 user 0m1.760s 00:10:23.481 
sys 0m1.356s 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.481 ************************************ 00:10:23.481 END TEST dd_sparse_bdev_to_file 00:10:23.481 ************************************ 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:23.481 ************************************ 00:10:23.481 END TEST spdk_dd_sparse 00:10:23.481 ************************************ 00:10:23.481 00:10:23.481 real 0m7.173s 00:10:23.481 user 0m5.438s 00:10:23.481 sys 0m4.426s 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.481 08:10:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:23.481 08:10:28 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:23.481 08:10:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.481 08:10:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.481 08:10:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:23.482 ************************************ 00:10:23.482 START TEST spdk_dd_negative 00:10:23.482 ************************************ 00:10:23.482 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:23.742 * Looking for test storage... 
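Everything in this negative group is driven by the NOT wrapper from autotest_common.sh: each case runs spdk_dd with deliberately bad arguments and the test passes only if spdk_dd refuses. Stripped of the xtrace noise, the core of that pattern looks roughly like this (a simplified sketch, not the verbatim helper; the real one also validates the executable and remaps exit codes above 128, as the es=237 -> 109 -> 1 sequence earlier in this log shows):

NOT() {
    local es=0
    "$@" || es=$?                          # run the command under test, remember its exit status
    (( es > 128 )) && es=$(( es - 128 ))   # remap high exit codes, mirroring the trace above
    (( es != 0 ))                          # the negative test succeeds only if the command failed
}
NOT spdk_dd --ii= --ob=                    # e.g. an unrecognized option must be rejected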
00:10:23.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:10:23.742 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.743 --rc genhtml_branch_coverage=1 00:10:23.743 --rc genhtml_function_coverage=1 00:10:23.743 --rc genhtml_legend=1 00:10:23.743 --rc geninfo_all_blocks=1 00:10:23.743 --rc geninfo_unexecuted_blocks=1 00:10:23.743 00:10:23.743 ' 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.743 --rc genhtml_branch_coverage=1 00:10:23.743 --rc genhtml_function_coverage=1 00:10:23.743 --rc genhtml_legend=1 00:10:23.743 --rc geninfo_all_blocks=1 00:10:23.743 --rc geninfo_unexecuted_blocks=1 00:10:23.743 00:10:23.743 ' 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.743 --rc genhtml_branch_coverage=1 00:10:23.743 --rc genhtml_function_coverage=1 00:10:23.743 --rc genhtml_legend=1 00:10:23.743 --rc geninfo_all_blocks=1 00:10:23.743 --rc geninfo_unexecuted_blocks=1 00:10:23.743 00:10:23.743 ' 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.743 --rc genhtml_branch_coverage=1 00:10:23.743 --rc genhtml_function_coverage=1 00:10:23.743 --rc genhtml_legend=1 00:10:23.743 --rc geninfo_all_blocks=1 00:10:23.743 --rc geninfo_unexecuted_blocks=1 00:10:23.743 00:10:23.743 ' 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:23.743 ************************************ 00:10:23.743 START TEST 
dd_invalid_arguments 00:10:23.743 ************************************ 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:23.743 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:24.004 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:24.004 00:10:24.004 CPU options: 00:10:24.004 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:24.004 (like [0,1,10]) 00:10:24.004 --lcores lcore to CPU mapping list. The list is in the format: 00:10:24.004 [<,lcores[@CPUs]>...] 00:10:24.004 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:24.004 Within the group, '-' is used for range separator, 00:10:24.004 ',' is used for single number separator. 00:10:24.004 '( )' can be omitted for single element group, 00:10:24.004 '@' can be omitted if cpus and lcores have the same value 00:10:24.004 --disable-cpumask-locks Disable CPU core lock files. 00:10:24.004 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:24.004 pollers in the app support interrupt mode) 00:10:24.004 -p, --main-core main (primary) core for DPDK 00:10:24.004 00:10:24.004 Configuration options: 00:10:24.004 -c, --config, --json JSON config file 00:10:24.004 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:24.004 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:10:24.004 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:24.004 --rpcs-allowed comma-separated list of permitted RPCS 00:10:24.004 --json-ignore-init-errors don't exit on invalid config entry 00:10:24.004 00:10:24.004 Memory options: 00:10:24.004 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:24.004 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:24.004 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:24.004 -R, --huge-unlink unlink huge files after initialization 00:10:24.004 -n, --mem-channels number of memory channels used for DPDK 00:10:24.004 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:24.004 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:24.004 --no-huge run without using hugepages 00:10:24.004 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:10:24.004 -i, --shm-id shared memory ID (optional) 00:10:24.004 -g, --single-file-segments force creating just one hugetlbfs file 00:10:24.004 00:10:24.004 PCI options: 00:10:24.004 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:24.004 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:24.004 -u, --no-pci disable PCI access 00:10:24.005 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:24.005 00:10:24.005 Log options: 00:10:24.005 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:24.005 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:24.005 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:24.005 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:24.005 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:10:24.005 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:10:24.005 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:10:24.005 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:10:24.005 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:10:24.005 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:10:24.005 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:10:24.005 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:10:24.005 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:10:24.005 --silence-noticelog disable notice level logging to stderr 00:10:24.005 00:10:24.005 Trace options: 00:10:24.005 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:24.005 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:10:24.005 [2024-11-17 08:10:28.831259] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:10:24.005 setting 0 to disable trace (default 32768) 00:10:24.005 Tracepoints vary in size and can use more than one trace entry. 00:10:24.005 -e, --tpoint-group [:] 00:10:24.005 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:10:24.005 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:10:24.005 blob, bdev_raid, scheduler, all). 00:10:24.005 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:10:24.005 a tracepoint group. First tpoint inside a group can be enabled by 00:10:24.005 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:10:24.005 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:10:24.005 in /include/spdk_internal/trace_defs.h 00:10:24.005 00:10:24.005 Other options: 00:10:24.005 -h, --help show this usage 00:10:24.005 -v, --version print SPDK version 00:10:24.005 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:10:24.005 --env-context Opaque context for use of the env implementation 00:10:24.005 00:10:24.005 Application specific: 00:10:24.005 [--------- DD Options ---------] 00:10:24.005 --if Input file. Must specify either --if or --ib. 00:10:24.005 --ib Input bdev. Must specifier either --if or --ib 00:10:24.005 --of Output file. Must specify either --of or --ob. 00:10:24.005 --ob Output bdev. Must specify either --of or --ob. 00:10:24.005 --iflag Input file flags. 00:10:24.005 --oflag Output file flags. 00:10:24.005 --bs I/O unit size (default: 4096) 00:10:24.005 --qd Queue depth (default: 2) 00:10:24.005 --count I/O unit count. The number of I/O units to copy. (default: all) 00:10:24.005 --skip Skip this many I/O units at start of input. (default: 0) 00:10:24.005 --seek Skip this many I/O units at start of output. (default: 0) 00:10:24.005 --aio Force usage of AIO. (by default io_uring is used if available) 00:10:24.005 --sparse Enable hole skipping in input target 00:10:24.005 Available iflag and oflag values: 00:10:24.005 append - append mode 00:10:24.005 direct - use direct I/O for data 00:10:24.005 directory - fail unless a directory 00:10:24.005 dsync - use synchronized I/O for data 00:10:24.005 noatime - do not update access time 00:10:24.005 noctty - do not assign controlling terminal from file 00:10:24.005 nofollow - do not follow symlinks 00:10:24.005 nonblock - use non-blocking I/O 00:10:24.005 sync - use synchronized I/O for data and metadata 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:24.005 ************************************ 00:10:24.005 END TEST dd_invalid_arguments 00:10:24.005 ************************************ 00:10:24.005 00:10:24.005 real 0m0.168s 00:10:24.005 user 0m0.081s 00:10:24.005 sys 0m0.085s 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:24.005 ************************************ 00:10:24.005 START TEST dd_double_input 00:10:24.005 ************************************ 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:24.005 08:10:28 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:24.265 [2024-11-17 08:10:29.075404] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
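dd_invalid_arguments and dd_double_input's failing invocation above, plus the cases that follow, walk spdk_dd through its mutually exclusive input/output checks. Condensed into one list (paths shortened from the test's dd.dump0/dd.dump1; the spdk_dd.c line numbers are the ones printed in this log):

spdk_dd --ii= --ob=                           # unrecognized option       -> spdk_dd.c:1480
spdk_dd --if=dd.dump0 --ib= --ob=             # --if and --ib together    -> spdk_dd.c:1487
spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=     # --of and --ob together    -> spdk_dd.c:1493
spdk_dd --ob=                                 # no --if or --ib given     -> spdk_dd.c:1499
spdk_dd --if=dd.dump0                         # no --of or --ob given     -> spdk_dd.c:1505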
00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:24.265 00:10:24.265 real 0m0.177s 00:10:24.265 user 0m0.088s 00:10:24.265 sys 0m0.086s 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.265 ************************************ 00:10:24.265 END TEST dd_double_input 00:10:24.265 ************************************ 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:24.265 ************************************ 00:10:24.265 START TEST dd_double_output 00:10:24.265 ************************************ 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:24.265 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:24.525 [2024-11-17 08:10:29.331158] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:24.525 ************************************ 00:10:24.525 END TEST dd_double_output 00:10:24.525 ************************************ 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:24.525 00:10:24.525 real 0m0.185s 00:10:24.525 user 0m0.084s 00:10:24.525 sys 0m0.100s 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 ************************************ 00:10:24.525 START TEST dd_no_input 00:10:24.525 ************************************ 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:24.525 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:24.785 [2024-11-17 08:10:29.594292] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:10:24.785 ************************************ 00:10:24.785 END TEST dd_no_input 00:10:24.785 ************************************ 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:24.785 00:10:24.785 real 0m0.176s 00:10:24.785 user 0m0.094s 00:10:24.785 sys 0m0.080s 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 ************************************ 00:10:24.785 START TEST dd_no_output 00:10:24.785 ************************************ 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:24.785 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:25.045 [2024-11-17 08:10:29.835966] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:10:25.045 08:10:29 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.045 00:10:25.045 real 0m0.177s 00:10:25.045 user 0m0.092s 00:10:25.045 sys 0m0.083s 00:10:25.045 ************************************ 00:10:25.045 END TEST dd_no_output 00:10:25.045 ************************************ 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:25.045 ************************************ 00:10:25.045 START TEST dd_wrong_blocksize 00:10:25.045 ************************************ 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.045 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.046 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.046 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:25.046 08:10:29 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:25.306 [2024-11-17 08:10:30.099318] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.306 00:10:25.306 real 0m0.183s 00:10:25.306 user 0m0.087s 00:10:25.306 sys 0m0.094s 00:10:25.306 ************************************ 00:10:25.306 END TEST dd_wrong_blocksize 00:10:25.306 ************************************ 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:25.306 ************************************ 00:10:25.306 START TEST dd_smaller_blocksize 00:10:25.306 ************************************ 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.306 
08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:25.306 08:10:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:25.566 [2024-11-17 08:10:30.359076] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:25.566 [2024-11-17 08:10:30.359220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64615 ] 00:10:25.566 [2024-11-17 08:10:30.546899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.826 [2024-11-17 08:10:30.681465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.086 [2024-11-17 08:10:30.916955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.653 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:26.912 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:27.171 [2024-11-17 08:10:31.927944] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:27.171 [2024-11-17 08:10:31.928041] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.109 [2024-11-17 08:10:32.834279] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:28.109 00:10:28.109 real 0m2.867s 00:10:28.109 user 0m1.744s 00:10:28.109 sys 0m1.009s 00:10:28.109 ************************************ 00:10:28.109 END TEST dd_smaller_blocksize 00:10:28.109 ************************************ 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.109 08:10:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:28.369 ************************************ 00:10:28.369 START TEST dd_invalid_count 00:10:28.369 ************************************ 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:28.369 [2024-11-17 08:10:33.301034] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:28.369 00:10:28.369 real 0m0.185s 00:10:28.369 user 0m0.089s 00:10:28.369 sys 0m0.093s 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.369 ************************************ 00:10:28.369 END TEST dd_invalid_count 00:10:28.369 ************************************ 00:10:28.369 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:28.629 ************************************ 
00:10:28.629 START TEST dd_invalid_oflag 00:10:28.629 ************************************ 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:28.629 [2024-11-17 08:10:33.550233] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:28.629 00:10:28.629 real 0m0.175s 00:10:28.629 user 0m0.084s 00:10:28.629 sys 0m0.088s 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.629 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:28.629 ************************************ 00:10:28.629 END TEST dd_invalid_oflag 00:10:28.629 ************************************ 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:28.888 ************************************ 00:10:28.888 START TEST dd_invalid_iflag 00:10:28.888 
************************************ 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:28.888 [2024-11-17 08:10:33.797631] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:28.888 00:10:28.888 real 0m0.167s 00:10:28.888 user 0m0.078s 00:10:28.888 sys 0m0.087s 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.888 08:10:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:28.888 ************************************ 00:10:28.888 END TEST dd_invalid_iflag 00:10:28.888 ************************************ 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:29.148 ************************************ 00:10:29.148 START TEST dd_unknown_flag 00:10:29.148 ************************************ 00:10:29.148 
08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:29.148 08:10:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:29.148 [2024-11-17 08:10:34.043757] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
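Every negative test in this trace follows the same shape: dd/negative_dd.sh builds an intentionally invalid spdk_dd invocation, common/autotest_common.sh runs it through the NOT/valid_exec_arg helpers, and the case passes only when spdk_dd exits non-zero (es=22 for the pure argument-validation failures above; larger statuses such as 244 or 234 get remapped before the final (( !es == 0 )) check). A minimal stand-alone approximation of one such case is sketched below; it assumes only the spdk_dd binary path and dump files visible in the trace, and the check itself is illustrative rather than the real autotest_common.sh logic.

#!/usr/bin/env bash
# Sketch of a dd_double_output-style check: --of and --ob together must be
# rejected (cf. "You may specify either --of or --ob, but not both." above).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

"$DD" --if="$DUMP0" --of="$DUMP1" --ob=
es=$?
# The harness only cares that the status is non-zero (es=22 in the trace).
if (( es == 0 )); then
    echo "FAIL: spdk_dd accepted --of and --ob together" >&2
    exit 1
fi
echo "OK: invalid flag combination rejected with exit status $es"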
00:10:29.149 [2024-11-17 08:10:34.043874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64744 ] 00:10:29.408 [2024-11-17 08:10:34.220736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.408 [2024-11-17 08:10:34.351374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.668 [2024-11-17 08:10:34.587014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.928 [2024-11-17 08:10:34.725869] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:29.928 [2024-11-17 08:10:34.725937] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:29.928 [2024-11-17 08:10:34.726004] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:29.928 [2024-11-17 08:10:34.726024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:29.928 [2024-11-17 08:10:34.726252] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:29.928 [2024-11-17 08:10:34.726283] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:29.928 [2024-11-17 08:10:34.726354] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:29.928 [2024-11-17 08:10:34.726369] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:30.867 [2024-11-17 08:10:35.616983] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:31.129 00:10:31.129 real 0m1.967s 00:10:31.129 user 0m1.546s 00:10:31.129 sys 0m0.318s 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:31.129 ************************************ 00:10:31.129 END TEST dd_unknown_flag 00:10:31.129 ************************************ 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.129 ************************************ 00:10:31.129 START TEST dd_invalid_json 00:10:31.129 ************************************ 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.129 08:10:35 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:31.129 [2024-11-17 08:10:36.099944] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:31.129 [2024-11-17 08:10:36.100066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64790 ] 00:10:31.389 [2024-11-17 08:10:36.285746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.649 [2024-11-17 08:10:36.419201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.649 [2024-11-17 08:10:36.419296] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:31.649 [2024-11-17 08:10:36.419315] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:31.649 [2024-11-17 08:10:36.419331] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.649 [2024-11-17 08:10:36.419395] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:31.909 00:10:31.909 real 0m0.696s 00:10:31.909 user 0m0.404s 00:10:31.909 sys 0m0.190s 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:31.909 ************************************ 00:10:31.909 END TEST dd_invalid_json 00:10:31.909 ************************************ 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.909 ************************************ 00:10:31.909 START TEST dd_invalid_seek 00:10:31.909 ************************************ 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:31.909 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:31.909 
08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.910 08:10:36 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:31.910 { 00:10:31.910 "subsystems": [ 00:10:31.910 { 00:10:31.910 "subsystem": "bdev", 00:10:31.910 "config": [ 00:10:31.910 { 00:10:31.910 "params": { 00:10:31.910 "block_size": 512, 00:10:31.910 "num_blocks": 512, 00:10:31.910 "name": "malloc0" 00:10:31.910 }, 00:10:31.910 "method": "bdev_malloc_create" 00:10:31.910 }, 00:10:31.910 { 00:10:31.910 "params": { 00:10:31.910 "block_size": 512, 00:10:31.910 "num_blocks": 512, 00:10:31.910 "name": "malloc1" 00:10:31.910 }, 00:10:31.910 "method": "bdev_malloc_create" 00:10:31.910 }, 00:10:31.910 { 00:10:31.910 "method": "bdev_wait_for_examine" 00:10:31.910 } 00:10:31.910 ] 00:10:31.910 } 00:10:31.910 ] 00:10:31.910 } 00:10:31.910 [2024-11-17 08:10:36.867480] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
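From dd_invalid_seek onward the targets are no longer dump files but in-memory malloc bdevs: the harness's gen_conf helper appears to emit the JSON shown above (two 512-block, 512-byte-block bdev_malloc_create entries plus bdev_wait_for_examine) and hands it to spdk_dd as --json /dev/fd/62. The sketch below reproduces that invocation outside the harness; writing the config to a temporary file instead of the /dev/fd/62 descriptor is an assumption made for simplicity, and the expected failure is the one recorded further down ("--seek value too big (513) - only 512 blocks available in output").

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Seeking 513 blocks into a 512-block bdev must fail.
if "$DD" --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json "$conf"; then
    echo "FAIL: out-of-range --seek was accepted" >&2
    exit 1
fi
echo "OK: --seek=513 rejected on a 512-block malloc bdev"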
00:10:31.910 [2024-11-17 08:10:36.867595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64817 ] 00:10:32.169 [2024-11-17 08:10:37.046332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.169 [2024-11-17 08:10:37.175821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.429 [2024-11-17 08:10:37.417179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.688 [2024-11-17 08:10:37.578819] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:10:32.688 [2024-11-17 08:10:37.578893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:33.627 [2024-11-17 08:10:38.500988] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:10:33.887 ************************************ 00:10:33.887 END TEST dd_invalid_seek 00:10:33.887 ************************************ 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:33.887 00:10:33.887 real 0m2.020s 00:10:33.887 user 0m1.628s 00:10:33.887 sys 0m0.346s 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:33.887 ************************************ 00:10:33.887 START TEST dd_invalid_skip 00:10:33.887 ************************************ 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.887 08:10:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:34.146 { 00:10:34.146 "subsystems": [ 00:10:34.146 { 00:10:34.146 "subsystem": "bdev", 00:10:34.146 "config": [ 00:10:34.146 { 00:10:34.146 "params": { 00:10:34.146 "block_size": 512, 00:10:34.146 "num_blocks": 512, 00:10:34.146 "name": "malloc0" 00:10:34.146 }, 00:10:34.146 "method": "bdev_malloc_create" 00:10:34.146 }, 00:10:34.146 { 00:10:34.146 "params": { 00:10:34.146 "block_size": 512, 00:10:34.146 "num_blocks": 512, 00:10:34.146 "name": "malloc1" 00:10:34.146 }, 00:10:34.146 "method": "bdev_malloc_create" 00:10:34.146 }, 00:10:34.146 { 00:10:34.146 "method": "bdev_wait_for_examine" 00:10:34.146 } 00:10:34.146 ] 00:10:34.146 } 00:10:34.146 ] 00:10:34.146 } 00:10:34.146 [2024-11-17 08:10:38.973761] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:34.146 [2024-11-17 08:10:38.973896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64872 ] 00:10:34.406 [2024-11-17 08:10:39.156790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.406 [2024-11-17 08:10:39.285680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.665 [2024-11-17 08:10:39.534814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.925 [2024-11-17 08:10:39.699659] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:10:34.925 [2024-11-17 08:10:39.699954] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:35.862 [2024-11-17 08:10:40.616438] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:36.121 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:10:36.121 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:36.121 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:10:36.121 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:10:36.121 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:10:36.121 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:36.122 00:10:36.122 real 0m2.037s 00:10:36.122 user 0m1.629s 00:10:36.122 sys 0m0.363s 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:36.122 ************************************ 00:10:36.122 END TEST dd_invalid_skip 00:10:36.122 ************************************ 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:36.122 ************************************ 00:10:36.122 START TEST dd_invalid_input_count 00:10:36.122 ************************************ 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:36.122 08:10:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:36.122 { 00:10:36.122 "subsystems": [ 00:10:36.122 { 00:10:36.122 "subsystem": "bdev", 00:10:36.122 "config": [ 00:10:36.122 { 00:10:36.122 "params": { 00:10:36.122 "block_size": 512, 00:10:36.122 "num_blocks": 512, 00:10:36.122 "name": "malloc0" 00:10:36.122 }, 00:10:36.122 "method": "bdev_malloc_create" 00:10:36.122 }, 00:10:36.122 { 00:10:36.122 "params": { 00:10:36.122 "block_size": 512, 00:10:36.122 "num_blocks": 512, 00:10:36.122 "name": "malloc1" 00:10:36.122 }, 00:10:36.122 "method": "bdev_malloc_create" 00:10:36.122 }, 00:10:36.122 { 00:10:36.122 "method": "bdev_wait_for_examine" 00:10:36.122 } 00:10:36.122 ] 00:10:36.122 } 00:10:36.122 ] 00:10:36.122 } 00:10:36.122 [2024-11-17 08:10:41.088406] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:36.122 [2024-11-17 08:10:41.088566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64923 ] 00:10:36.381 [2024-11-17 08:10:41.274626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.640 [2024-11-17 08:10:41.415068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.640 [2024-11-17 08:10:41.648932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.900 [2024-11-17 08:10:41.820488] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:10:36.900 [2024-11-17 08:10:41.820562] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.839 [2024-11-17 08:10:42.733816] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:38.098 00:10:38.098 real 0m2.046s 00:10:38.098 user 0m1.655s 00:10:38.098 sys 0m0.349s 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.098 ************************************ 00:10:38.098 END TEST dd_invalid_input_count 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:38.098 ************************************ 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:38.098 ************************************ 00:10:38.098 START TEST dd_invalid_output_count 00:10:38.098 ************************************ 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.098 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.358 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.358 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:38.358 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:38.358 08:10:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:38.358 { 00:10:38.359 "subsystems": [ 00:10:38.359 { 00:10:38.359 "subsystem": "bdev", 00:10:38.359 "config": [ 00:10:38.359 { 00:10:38.359 "params": { 00:10:38.359 "block_size": 512, 00:10:38.359 "num_blocks": 512, 00:10:38.359 "name": "malloc0" 00:10:38.359 }, 00:10:38.359 "method": "bdev_malloc_create" 00:10:38.359 }, 00:10:38.359 { 00:10:38.359 "method": "bdev_wait_for_examine" 00:10:38.359 } 00:10:38.359 ] 00:10:38.359 } 00:10:38.359 ] 00:10:38.359 } 00:10:38.359 [2024-11-17 08:10:43.220760] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:38.359 [2024-11-17 08:10:43.221061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64976 ] 00:10:38.619 [2024-11-17 08:10:43.406123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.619 [2024-11-17 08:10:43.548212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.878 [2024-11-17 08:10:43.786293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.138 [2024-11-17 08:10:43.941016] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:10:39.138 [2024-11-17 08:10:43.941322] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:40.077 [2024-11-17 08:10:44.847988] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:40.337 00:10:40.337 real 0m2.032s 00:10:40.337 user 0m1.620s 00:10:40.337 sys 0m0.356s 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.337 ************************************ 00:10:40.337 END TEST dd_invalid_output_count 00:10:40.337 ************************************ 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 ************************************ 00:10:40.337 START TEST dd_bs_not_multiple 00:10:40.337 ************************************ 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:40.337 08:10:45 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:40.337 08:10:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:40.337 { 00:10:40.337 "subsystems": [ 00:10:40.337 { 00:10:40.337 "subsystem": "bdev", 00:10:40.337 "config": [ 00:10:40.337 { 00:10:40.337 "params": { 00:10:40.337 "block_size": 512, 00:10:40.337 "num_blocks": 512, 00:10:40.337 "name": "malloc0" 00:10:40.337 }, 00:10:40.337 "method": "bdev_malloc_create" 00:10:40.337 }, 00:10:40.337 { 00:10:40.337 "params": { 00:10:40.337 "block_size": 512, 00:10:40.337 "num_blocks": 512, 00:10:40.337 "name": "malloc1" 00:10:40.337 }, 00:10:40.337 "method": "bdev_malloc_create" 00:10:40.337 }, 00:10:40.337 { 00:10:40.337 "method": "bdev_wait_for_examine" 00:10:40.338 } 00:10:40.338 ] 00:10:40.338 } 00:10:40.338 ] 00:10:40.338 } 00:10:40.338 [2024-11-17 08:10:45.330847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:40.338 [2024-11-17 08:10:45.330976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65023 ] 00:10:40.598 [2024-11-17 08:10:45.516072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.857 [2024-11-17 08:10:45.648427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.117 [2024-11-17 08:10:45.895973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.117 [2024-11-17 08:10:46.061268] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:10:41.117 [2024-11-17 08:10:46.061350] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:42.134 [2024-11-17 08:10:46.981655] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:42.404 00:10:42.404 real 0m2.058s 00:10:42.404 user 0m1.670s 00:10:42.404 sys 0m0.346s 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.404 ************************************ 00:10:42.404 END TEST dd_bs_not_multiple 00:10:42.404 ************************************ 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:42.404 00:10:42.404 real 0m18.878s 00:10:42.404 user 0m13.191s 00:10:42.404 sys 0m5.079s 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.404 08:10:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:42.404 ************************************ 00:10:42.404 END TEST spdk_dd_negative 00:10:42.404 ************************************ 00:10:42.404 00:10:42.404 real 3m30.982s 00:10:42.404 user 2m44.330s 00:10:42.404 sys 1m25.498s 00:10:42.404 08:10:47 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.404 ************************************ 00:10:42.404 08:10:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:42.404 END TEST spdk_dd 00:10:42.404 ************************************ 00:10:42.663 08:10:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:42.663 08:10:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.663 08:10:47 -- common/autotest_common.sh@10 -- # set +x 00:10:42.663 08:10:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:10:42.663 08:10:47 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:42.663 08:10:47 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:42.663 08:10:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.663 08:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.663 08:10:47 -- common/autotest_common.sh@10 -- # set +x 00:10:42.663 ************************************ 00:10:42.663 START TEST nvmf_tcp 00:10:42.663 ************************************ 00:10:42.663 08:10:47 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:42.664 * Looking for test storage... 00:10:42.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:42.664 08:10:47 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.664 08:10:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.664 08:10:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.923 08:10:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:42.923 08:10:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.924 08:10:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.924 --rc genhtml_branch_coverage=1 00:10:42.924 --rc genhtml_function_coverage=1 00:10:42.924 --rc genhtml_legend=1 00:10:42.924 --rc geninfo_all_blocks=1 00:10:42.924 --rc geninfo_unexecuted_blocks=1 00:10:42.924 00:10:42.924 ' 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.924 --rc genhtml_branch_coverage=1 00:10:42.924 --rc genhtml_function_coverage=1 00:10:42.924 --rc genhtml_legend=1 00:10:42.924 --rc geninfo_all_blocks=1 00:10:42.924 --rc geninfo_unexecuted_blocks=1 00:10:42.924 00:10:42.924 ' 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.924 --rc genhtml_branch_coverage=1 00:10:42.924 --rc genhtml_function_coverage=1 00:10:42.924 --rc genhtml_legend=1 00:10:42.924 --rc geninfo_all_blocks=1 00:10:42.924 --rc geninfo_unexecuted_blocks=1 00:10:42.924 00:10:42.924 ' 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.924 --rc genhtml_branch_coverage=1 00:10:42.924 --rc genhtml_function_coverage=1 00:10:42.924 --rc genhtml_legend=1 00:10:42.924 --rc geninfo_all_blocks=1 00:10:42.924 --rc geninfo_unexecuted_blocks=1 00:10:42.924 00:10:42.924 ' 00:10:42.924 08:10:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:42.924 08:10:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:42.924 08:10:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.924 08:10:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.924 ************************************ 00:10:42.924 START TEST nvmf_target_core 00:10:42.924 ************************************ 00:10:42.924 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:42.924 * Looking for test storage... 00:10:42.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:42.924 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.924 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.924 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.184 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.184 --rc genhtml_branch_coverage=1 00:10:43.184 --rc genhtml_function_coverage=1 00:10:43.184 --rc genhtml_legend=1 00:10:43.184 --rc geninfo_all_blocks=1 00:10:43.185 --rc geninfo_unexecuted_blocks=1 00:10:43.185 00:10:43.185 ' 00:10:43.185 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.185 --rc genhtml_branch_coverage=1 00:10:43.185 --rc genhtml_function_coverage=1 00:10:43.185 --rc genhtml_legend=1 00:10:43.185 --rc geninfo_all_blocks=1 00:10:43.185 --rc geninfo_unexecuted_blocks=1 00:10:43.185 00:10:43.185 ' 00:10:43.185 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.185 --rc genhtml_branch_coverage=1 00:10:43.185 --rc genhtml_function_coverage=1 00:10:43.185 --rc genhtml_legend=1 00:10:43.185 --rc geninfo_all_blocks=1 00:10:43.185 --rc geninfo_unexecuted_blocks=1 00:10:43.185 00:10:43.185 ' 00:10:43.185 08:10:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.185 --rc genhtml_branch_coverage=1 00:10:43.185 --rc genhtml_function_coverage=1 00:10:43.185 --rc genhtml_legend=1 00:10:43.185 --rc geninfo_all_blocks=1 00:10:43.185 --rc geninfo_unexecuted_blocks=1 00:10:43.185 00:10:43.185 ' 00:10:43.185 08:10:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
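The nvmf/common.sh lines above establish the host identity that later nvme connect calls reuse: a host NQN generated once with nvme gen-hostnqn, plus the matching host ID, packed into the NVME_HOST array. A minimal sketch of that pattern follows; deriving the host ID from the NQN's uuid suffix is an assumption based on the values in the trace, and the connect call is purely illustrative (it points at the default 127.0.0.1:4420 and the nqn.2016-06.io.spdk:testnqn subsystem from the trace, which are not listening yet at this point in the run).

    # Sketch of the host-identity pattern traced above (not the literal common.sh code).
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:94e5d28e-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # assumed: host ID is the uuid suffix of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Illustrative connect showing how the array expands into the two host flags.
    nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"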
Linux = Linux ']' 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.185 ************************************ 00:10:43.185 START TEST nvmf_host_management 00:10:43.185 ************************************ 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:43.185 * Looking for test storage... 
00:10:43.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.185 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.447 --rc genhtml_branch_coverage=1 00:10:43.447 --rc genhtml_function_coverage=1 00:10:43.447 --rc genhtml_legend=1 00:10:43.447 --rc geninfo_all_blocks=1 00:10:43.447 --rc geninfo_unexecuted_blocks=1 00:10:43.447 00:10:43.447 ' 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.447 --rc genhtml_branch_coverage=1 00:10:43.447 --rc genhtml_function_coverage=1 00:10:43.447 --rc genhtml_legend=1 00:10:43.447 --rc geninfo_all_blocks=1 00:10:43.447 --rc geninfo_unexecuted_blocks=1 00:10:43.447 00:10:43.447 ' 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.447 --rc genhtml_branch_coverage=1 00:10:43.447 --rc genhtml_function_coverage=1 00:10:43.447 --rc genhtml_legend=1 00:10:43.447 --rc geninfo_all_blocks=1 00:10:43.447 --rc geninfo_unexecuted_blocks=1 00:10:43.447 00:10:43.447 ' 00:10:43.447 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.447 --rc genhtml_branch_coverage=1 00:10:43.447 --rc genhtml_function_coverage=1 00:10:43.447 --rc genhtml_legend=1 00:10:43.447 --rc geninfo_all_blocks=1 00:10:43.447 --rc geninfo_unexecuted_blocks=1 00:10:43.447 00:10:43.447 ' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
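Each run_test scope above repeats the same lcov probe: the harness takes the last field of lcov --version, compares it field by field against 2 and, since 1.15 < 2, keeps the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options. A condensed sketch of that comparison, assuming plain numeric dotted versions (the real cmp_versions in scripts/common.sh also splits on '-' and ':'):

    # Condensed sketch of the "lt 1.15 2" check traced above; not the literal cmp_versions code.
    version_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                      # equal is not "less than"
    }

    ver=$(lcov --version | awk '{print $NF}')       # e.g. 1.15
    if version_lt "$ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi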
00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.448 08:10:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:43.448 Cannot find device "nvmf_init_br" 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:43.448 Cannot find device "nvmf_init_br2" 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:43.448 Cannot find device "nvmf_tgt_br" 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.448 Cannot find device "nvmf_tgt_br2" 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:43.448 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:43.448 Cannot find device "nvmf_init_br" 00:10:43.449 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:43.449 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:43.708 Cannot find device "nvmf_init_br2" 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:43.708 Cannot find device "nvmf_tgt_br" 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:43.708 Cannot find device "nvmf_tgt_br2" 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:43.708 Cannot find device "nvmf_br" 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:43.708 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:43.708 Cannot find device "nvmf_init_if" 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:43.709 Cannot find device "nvmf_init_if2" 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.709 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.969 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:44.229 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:44.229 08:10:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:44.229 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.229 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:44.229 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:44.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.194 ms 00:10:44.229 00:10:44.229 --- 10.0.0.3 ping statistics --- 00:10:44.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.229 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:44.229 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:44.229 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:44.229 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.119 ms 00:10:44.229 00:10:44.229 --- 10.0.0.4 ping statistics --- 00:10:44.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.229 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:44.229 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:44.229 00:10:44.229 --- 10.0.0.1 ping statistics --- 00:10:44.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.230 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:44.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:10:44.230 00:10:44.230 --- 10.0.0.2 ping statistics --- 00:10:44.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.230 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65385 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65385 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65385 ']' 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.230 08:10:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:44.230 [2024-11-17 08:10:49.195079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:44.230 [2024-11-17 08:10:49.195191] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.488 [2024-11-17 08:10:49.375506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.747 [2024-11-17 08:10:49.514697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.747 [2024-11-17 08:10:49.514756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.747 [2024-11-17 08:10:49.514773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.747 [2024-11-17 08:10:49.514801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.747 [2024-11-17 08:10:49.514815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.747 [2024-11-17 08:10:49.517224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.747 [2024-11-17 08:10:49.517328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.747 [2024-11-17 08:10:49.517347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:44.747 [2024-11-17 08:10:49.517346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.007 [2024-11-17 08:10:49.795244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.007 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.007 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:45.008 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.008 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.008 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.268 [2024-11-17 08:10:50.061545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.268 Malloc0 00:10:45.268 [2024-11-17 08:10:50.217889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.268 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65445 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65445 /var/tmp/bdevperf.sock 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65445 ']' 00:10:45.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
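The trace above builds rpcs.txt from a heredoc (the cat at host_management.sh line 23) and feeds the whole batch to rpc_cmd; only its results are echoed: a Malloc0 bdev and a TCP listener on 10.0.0.3:4420. The actual rpcs.txt contents are not printed in the log, so the following is only a hypothetical stand-in, written as individual scripts/rpc.py calls, that would produce the same visible state (a malloc namespace under nqn.2016-06.io.spdk:cnode0, reachable only by nqn.2016-06.io.spdk:host0); the bdev size, block size and serial number here are illustrative, not copied from the script:

# Hypothetical equivalent of the batched rpcs.txt seen above.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0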
00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:45.528 { 00:10:45.528 "params": { 00:10:45.528 "name": "Nvme$subsystem", 00:10:45.528 "trtype": "$TEST_TRANSPORT", 00:10:45.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.528 "adrfam": "ipv4", 00:10:45.528 "trsvcid": "$NVMF_PORT", 00:10:45.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.528 "hdgst": ${hdgst:-false}, 00:10:45.528 "ddgst": ${ddgst:-false} 00:10:45.528 }, 00:10:45.528 "method": "bdev_nvme_attach_controller" 00:10:45.528 } 00:10:45.528 EOF 00:10:45.528 )") 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:45.528 08:10:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:45.528 "params": { 00:10:45.528 "name": "Nvme0", 00:10:45.528 "trtype": "tcp", 00:10:45.528 "traddr": "10.0.0.3", 00:10:45.528 "adrfam": "ipv4", 00:10:45.528 "trsvcid": "4420", 00:10:45.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:45.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:45.528 "hdgst": false, 00:10:45.528 "ddgst": false 00:10:45.528 }, 00:10:45.528 "method": "bdev_nvme_attach_controller" 00:10:45.528 }' 00:10:45.528 [2024-11-17 08:10:50.392175] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:45.528 [2024-11-17 08:10:50.392297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65445 ] 00:10:45.789 [2024-11-17 08:10:50.579597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.789 [2024-11-17 08:10:50.717140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.048 [2024-11-17 08:10:50.962410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.309 Running I/O for 10 seconds... 
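gen_nvmf_target_json above expands the per-subsystem heredoc template into the concrete bdev_nvme_attach_controller entry printed by printf (Nvme0 on 10.0.0.3:4420), and bdevperf consumes it through --json /dev/fd/63, so no config file touches disk. "Running I/O for 10 seconds..." is bdevperf starting the verify workload; the trace that follows is waitforio polling the bdevperf RPC socket until the bdev has served enough reads (67 on the first probe, 579 on the second, against a threshold of 100). A condensed sketch of that polling loop, assuming the stock scripts/rpc.py client; the in-tree helper is waitforio() in host_management.sh, driven through rpc_cmd:

# Sketch of the waitforio logic traced below: up to ten probes, 0.25 s apart,
# succeeding once num_read_ops reaches 100.
wait_for_reads() {
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i > 0; i--)); do
        ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
        if [[ $ops -ge 100 ]]; then
            return 0                       # enough traffic observed
        fi
        sleep 0.25
    done
    return 1                               # bdevperf never produced I/O
}
# wait_for_reads /var/tmp/bdevperf.sock Nvme0n1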
00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:46.309 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:46.568 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:46.568 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:46.568 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:46.568 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:46.568 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.568 08:10:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.829 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:46.829 [2024-11-17 08:10:51.622364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.829 [2024-11-17 08:10:51.622426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.829 [2024-11-17 08:10:51.622473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 
[2024-11-17 08:10:51.622622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 
08:10:51.622908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.622975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.622987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.830 [2024-11-17 08:10:51.623396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.830 [2024-11-17 08:10:51.623408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.623982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.623993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.624017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.624041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.624065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.624089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.624113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:46.831 [2024-11-17 08:10:51.624136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 task offset: 87040 on job bdev=Nvme0n1 fails 00:10:46.831 00:10:46.831 Latency(us) 00:10:46.831 [2024-11-17T08:10:51.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.831 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:46.831 Job: Nvme0n1 ended in about 0.43 seconds with error 00:10:46.831 Verification LBA range: start 0x0 length 0x400 00:10:46.831 Nvme0n1 : 0.43 1498.06 93.63 149.81 0.00 37826.05 3092.56 36215.88 00:10:46.831 [2024-11-17T08:10:51.843Z] =================================================================================================================== 00:10:46.831 [2024-11-17T08:10:51.843Z] Total : 1498.06 93.63 149.81 0.00 37826.05 3092.56 36215.88 00:10:46.831 [2024-11-17 08:10:51.624693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.831 [2024-11-17 
08:10:51.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.831 [2024-11-17 08:10:51.624758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.831 [2024-11-17 08:10:51.624783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:46.831 [2024-11-17 08:10:51.624809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.831 [2024-11-17 08:10:51.624821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:10:46.831 [2024-11-17 08:10:51.625831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:46.831 [2024-11-17 08:10:51.630896] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:46.831 [2024-11-17 08:10:51.630934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:46.831 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.832 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:46.832 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.832 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:46.832 [2024-11-17 08:10:51.637687] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:10:46.832 [2024-11-17 08:10:51.637815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:10:46.832 [2024-11-17 08:10:51.637848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.832 [2024-11-17 08:10:51.637867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:10:46.832 [2024-11-17 08:10:51.637882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:10:46.832 [2024-11-17 08:10:51.637897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:10:46.832 [2024-11-17 08:10:51.637910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500002ad80 00:10:46.832 [2024-11-17 08:10:51.637950] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:46.832 [2024-11-17 08:10:51.637970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:10:46.832 [2024-11-17 08:10:51.637988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:10:46.832 [2024-11-17 08:10:51.638003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:10:46.832 [2024-11-17 08:10:51.638020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:10:46.832 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.832 08:10:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65445 00:10:47.772 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65445) - No such process 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.772 { 00:10:47.772 "params": { 00:10:47.772 "name": "Nvme$subsystem", 00:10:47.772 "trtype": "$TEST_TRANSPORT", 00:10:47.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.772 "adrfam": "ipv4", 00:10:47.772 "trsvcid": "$NVMF_PORT", 00:10:47.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.772 "hdgst": ${hdgst:-false}, 00:10:47.772 "ddgst": ${ddgst:-false} 00:10:47.772 }, 00:10:47.772 "method": "bdev_nvme_attach_controller" 00:10:47.772 } 00:10:47.772 EOF 00:10:47.772 )") 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:47.772 08:10:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.772 "params": { 00:10:47.772 "name": "Nvme0", 00:10:47.772 "trtype": "tcp", 00:10:47.772 "traddr": "10.0.0.3", 00:10:47.772 "adrfam": "ipv4", 00:10:47.772 "trsvcid": "4420", 00:10:47.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:47.772 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:47.772 "hdgst": false, 00:10:47.772 "ddgst": false 00:10:47.772 }, 00:10:47.772 "method": "bdev_nvme_attach_controller" 00:10:47.772 }' 00:10:47.772 [2024-11-17 08:10:52.761997] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:47.772 [2024-11-17 08:10:52.762310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65488 ] 00:10:48.032 [2024-11-17 08:10:52.950010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.291 [2024-11-17 08:10:53.089706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.551 [2024-11-17 08:10:53.343579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.811 Running I/O for 1 seconds... 00:10:49.750 1600.00 IOPS, 100.00 MiB/s 00:10:49.750 Latency(us) 00:10:49.750 [2024-11-17T08:10:54.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:49.750 Verification LBA range: start 0x0 length 0x400 00:10:49.750 Nvme0n1 : 1.03 1619.64 101.23 0.00 0.00 38916.50 4816.50 36005.32 00:10:49.750 [2024-11-17T08:10:54.762Z] =================================================================================================================== 00:10:49.750 [2024-11-17T08:10:54.762Z] Total : 1619.64 101.23 0.00 0.00 38916.50 4816.50 36005.32 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.131 rmmod nvme_tcp 00:10:51.131 rmmod nvme_fabrics 
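To recap the failure injection traced above: with the 10-second verify run in flight, host_management.sh removed nqn.2016-06.io.spdk:host0 from cnode0's allowed hosts, so the target tore down the queue pair. That is why every outstanding command completes with "ABORTED - SQ DELETION", the reconnect is refused with "Subsystem ... does not allow host", the controller reset fails, and bdevperf stops itself ("spdk_app_stop'd on non-zero"), leaving the later kill -9 with no process to kill. Re-adding the host lets the second, 1-second bdevperf run connect and finish cleanly. In both latency tables the MiB/s column is just IOPS times the 64 KiB I/O size: 1498.06 × 64 KiB ≈ 93.63 MiB/s for the aborted run, 1619.64 × 64 KiB ≈ 101.23 MiB/s for the clean one. Condensed from the rpc_cmd calls traced above (NQNs verbatim from the log), the two RPCs at the heart of the test:

# Failure injection and recovery, as exercised by host_management.sh.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
#   -> in-flight I/O aborted (SQ DELETION), reconnect rejected, reset fails
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
#   -> the follow-up bdevperf run connects and completes verification

The rmmod lines ending the chunk above are the start of nvmftestfini's teardown, which continues below with killprocess, iptables restore and removal of the veth/bridge/netns plumbing.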
00:10:51.131 rmmod nvme_keyring 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65385 ']' 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65385 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65385 ']' 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65385 00:10:51.131 08:10:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65385 00:10:51.131 killing process with pid 65385 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65385' 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65385 00:10:51.131 08:10:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65385 00:10:52.511 [2024-11-17 08:10:57.232595] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:52.511 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:52.771 00:10:52.771 real 0m9.590s 00:10:52.771 user 0m35.095s 00:10:52.771 sys 0m2.599s 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:52.771 ************************************ 00:10:52.771 END TEST nvmf_host_management 00:10:52.771 ************************************ 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.771 ************************************ 00:10:52.771 START TEST nvmf_lvol 00:10:52.771 ************************************ 00:10:52.771 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:53.032 * Looking for test storage... 
00:10:53.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.032 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.033 --rc genhtml_branch_coverage=1 00:10:53.033 --rc genhtml_function_coverage=1 00:10:53.033 --rc genhtml_legend=1 00:10:53.033 --rc geninfo_all_blocks=1 00:10:53.033 --rc geninfo_unexecuted_blocks=1 00:10:53.033 00:10:53.033 ' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.033 --rc genhtml_branch_coverage=1 00:10:53.033 --rc genhtml_function_coverage=1 00:10:53.033 --rc genhtml_legend=1 00:10:53.033 --rc geninfo_all_blocks=1 00:10:53.033 --rc geninfo_unexecuted_blocks=1 00:10:53.033 00:10:53.033 ' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.033 --rc genhtml_branch_coverage=1 00:10:53.033 --rc genhtml_function_coverage=1 00:10:53.033 --rc genhtml_legend=1 00:10:53.033 --rc geninfo_all_blocks=1 00:10:53.033 --rc geninfo_unexecuted_blocks=1 00:10:53.033 00:10:53.033 ' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:53.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.033 --rc genhtml_branch_coverage=1 00:10:53.033 --rc genhtml_function_coverage=1 00:10:53.033 --rc genhtml_legend=1 00:10:53.033 --rc geninfo_all_blocks=1 00:10:53.033 --rc geninfo_unexecuted_blocks=1 00:10:53.033 00:10:53.033 ' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.033 08:10:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.033 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.033 08:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:53.033 
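Editor's note: the `[: : integer expression expected` message just above is a known-benign artifact of `[` being handed an empty expansion where it expects a number (the trace shows `'[' '' -eq 1 ']'`). A minimal sketch of the pattern and a defaulted variant; the variable name below is hypothetical, since the trace only shows the already-expanded empty value:

```bash
#!/usr/bin/env bash
# Reproduces the message seen in the trace: an empty/unset variable
# expanded inside an integer test.
some_flag=""                         # hypothetical name; the trace only shows ''
[ "$some_flag" -eq 1 ] && echo yes   # -> "[: : integer expression expected"

# A defaulted expansion keeps the same logic without the error.
[ "${some_flag:-0}" -eq 1 ] && echo yes   # quietly false when unset/empty
```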
08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.033 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
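Editor's note: the `lt 1.15 2` / `cmp_versions` trace at the top of this block is a field-by-field numeric compare of the installed lcov version against 1.15, used to decide whether the extra branch/function coverage flags are needed. A self-contained sketch of that logic, not the actual scripts/common.sh code:

```bash
#!/usr/bin/env bash
# Sketch of the version check traced above: lt 1.15 2 -> cmp_versions 1.15 '<' 2.
version_lt() {
    local IFS=.-:                 # split fields on dots, dashes, colons, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                      # equal -> not less-than
}

version_lt 1.15 2 && echo "old lcov: enable branch/function coverage options"
```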
00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:53.034 Cannot find device "nvmf_init_br" 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:53.034 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:53.294 Cannot find device "nvmf_init_br2" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:53.294 Cannot find device "nvmf_tgt_br" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:53.294 Cannot find device "nvmf_tgt_br2" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:53.294 Cannot find device "nvmf_init_br" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:53.294 Cannot find device "nvmf_init_br2" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:53.294 Cannot find device "nvmf_tgt_br" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:53.294 Cannot find device "nvmf_tgt_br2" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:53.294 Cannot find device "nvmf_br" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:53.294 Cannot find device "nvmf_init_if" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:53.294 Cannot find device "nvmf_init_if2" 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:53.294 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:53.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:53.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:10:53.555 00:10:53.555 --- 10.0.0.3 ping statistics --- 00:10:53.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.555 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:53.555 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:53.555 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:10:53.555 00:10:53.555 --- 10.0.0.4 ping statistics --- 00:10:53.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.555 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:53.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:53.555 00:10:53.555 --- 10.0.0.1 ping statistics --- 00:10:53.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.555 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:53.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:53.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:10:53.555 00:10:53.555 --- 10.0.0.2 ping statistics --- 00:10:53.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.555 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.555 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.556 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.556 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.556 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.556 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65795 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65795 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65795 ']' 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.816 08:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:53.816 [2024-11-17 08:10:58.716958] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
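Editor's note: the `nvmf_veth_init` bring-up traced above reduces to a short sequence. A condensed sketch using the same interface names and addresses that appear in the trace (run as root); this is a summary of the traced commands, not the full helper with its cleanup and error handling:

```bash
#!/usr/bin/env bash
set -e
# Target side lives in its own namespace; initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Two initiator and two target veth pairs, as in the trace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the target namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge ties the four root-namespace peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in and bridged traffic through.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                    # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```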
00:10:53.816 [2024-11-17 08:10:58.717065] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.076 [2024-11-17 08:10:58.906444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.076 [2024-11-17 08:10:59.047883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.076 [2024-11-17 08:10:59.047938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.076 [2024-11-17 08:10:59.047955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.076 [2024-11-17 08:10:59.047967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.076 [2024-11-17 08:10:59.047981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.076 [2024-11-17 08:10:59.050433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.076 [2024-11-17 08:10:59.050584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.076 [2024-11-17 08:10:59.050612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.334 [2024-11-17 08:10:59.290118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.592 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:54.850 [2024-11-17 08:10:59.754567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.850 08:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.109 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:55.109 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.678 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:55.678 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:55.678 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:55.937 08:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3e6d5de4-dac4-45e1-8e76-1a6b2798fa04 00:10:55.938 08:11:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3e6d5de4-dac4-45e1-8e76-1a6b2798fa04 lvol 20 00:10:56.197 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a1d55881-c3ce-4405-9a12-07af391bcdbc 00:10:56.197 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:56.456 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a1d55881-c3ce-4405-9a12-07af391bcdbc 00:10:56.456 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:56.715 [2024-11-17 08:11:01.631216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.715 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:56.975 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:56.975 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65865 00:10:56.975 08:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:57.912 08:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a1d55881-c3ce-4405-9a12-07af391bcdbc MY_SNAPSHOT 00:10:58.172 08:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=79ae59ee-ece7-4f5b-a985-8403d9cefdb0 00:10:58.172 08:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a1d55881-c3ce-4405-9a12-07af391bcdbc 30 00:10:58.431 08:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 79ae59ee-ece7-4f5b-a985-8403d9cefdb0 MY_CLONE 00:10:58.691 08:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4ff22dfe-9938-4f9e-9405-f3e333bc3957 00:10:58.691 08:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4ff22dfe-9938-4f9e-9405-f3e333bc3957 00:10:59.260 08:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65865 00:11:07.386 Initializing NVMe Controllers 00:11:07.386 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:11:07.386 Controller IO queue size 128, less than required. 00:11:07.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:07.386 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:07.386 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:07.386 Initialization complete. Launching workers. 
00:11:07.386 ======================================================== 00:11:07.386 Latency(us) 00:11:07.386 Device Information : IOPS MiB/s Average min max 00:11:07.386 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6319.60 24.69 20261.65 334.07 176967.76 00:11:07.386 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7395.20 28.89 17318.74 4139.20 200288.43 00:11:07.386 ======================================================== 00:11:07.386 Total : 13714.80 53.57 18674.79 334.07 200288.43 00:11:07.386 00:11:07.386 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:07.645 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a1d55881-c3ce-4405-9a12-07af391bcdbc 00:11:07.904 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e6d5de4-dac4-45e1-8e76-1a6b2798fa04 00:11:08.163 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:08.163 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:08.163 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:08.163 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.163 08:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.163 rmmod nvme_tcp 00:11:08.163 rmmod nvme_fabrics 00:11:08.163 rmmod nvme_keyring 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65795 ']' 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65795 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65795 ']' 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65795 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65795 00:11:08.163 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.164 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.164 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 65795' 00:11:08.164 killing process with pid 65795 00:11:08.164 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65795 00:11:08.164 08:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65795 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.070 08:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.070 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:11:10.070 00:11:10.070 real 0m17.313s 00:11:10.070 user 1m5.825s 00:11:10.071 sys 0m5.759s 00:11:10.071 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:10.071 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:10.071 ************************************ 00:11:10.071 END TEST nvmf_lvol 00:11:10.071 ************************************ 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.330 ************************************ 00:11:10.330 START TEST nvmf_lvs_grow 00:11:10.330 ************************************ 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:10.330 * Looking for test storage... 00:11:10.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.330 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.590 --rc genhtml_branch_coverage=1 00:11:10.590 --rc genhtml_function_coverage=1 00:11:10.590 --rc genhtml_legend=1 00:11:10.590 --rc geninfo_all_blocks=1 00:11:10.590 --rc geninfo_unexecuted_blocks=1 00:11:10.590 00:11:10.590 ' 00:11:10.590 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.590 --rc genhtml_branch_coverage=1 00:11:10.591 --rc genhtml_function_coverage=1 00:11:10.591 --rc genhtml_legend=1 00:11:10.591 --rc geninfo_all_blocks=1 00:11:10.591 --rc geninfo_unexecuted_blocks=1 00:11:10.591 00:11:10.591 ' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.591 --rc genhtml_branch_coverage=1 00:11:10.591 --rc genhtml_function_coverage=1 00:11:10.591 --rc genhtml_legend=1 00:11:10.591 --rc geninfo_all_blocks=1 00:11:10.591 --rc geninfo_unexecuted_blocks=1 00:11:10.591 00:11:10.591 ' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.591 --rc genhtml_branch_coverage=1 00:11:10.591 --rc genhtml_function_coverage=1 00:11:10.591 --rc genhtml_legend=1 00:11:10.591 --rc geninfo_all_blocks=1 00:11:10.591 --rc geninfo_unexecuted_blocks=1 00:11:10.591 00:11:10.591 ' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:10.591 08:11:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
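Editor's note: as in the lvol test above, this test drives a running SPDK target entirely through scripts/rpc.py; `bdevperf_rpc_sock` just defined is the separate RPC socket used when a second app (bdevperf) is in play. A condensed sketch of that control pattern, built from the RPC calls traced in the lvol test; it assumes a running nvmf_tgt, and UUIDs/names are placeholders:

```bash
#!/usr/bin/env bash
# Every step is a JSON-RPC call issued through scripts/rpc.py.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512                      # -> Malloc0
$rpc_py bdev_malloc_create 64 512                      # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol, prints its UUID
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# A second SPDK app (bdevperf here) listens on its own RPC socket,
# selected with -s, e.g.:
# $rpc_py -s /var/tmp/bdevperf.sock bdev_get_bdevs
```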
00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:10.591 Cannot find device "nvmf_init_br" 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:10.591 Cannot find device "nvmf_init_br2" 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:10.591 Cannot find device "nvmf_tgt_br" 00:11:10.591 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.592 Cannot find device "nvmf_tgt_br2" 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:10.592 Cannot find device "nvmf_init_br" 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:10.592 Cannot find device "nvmf_init_br2" 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:10.592 Cannot find device "nvmf_tgt_br" 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:10.592 Cannot find device "nvmf_tgt_br2" 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:11:10.592 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:10.851 Cannot find device "nvmf_br" 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:10.851 Cannot find device "nvmf_init_if" 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:10.851 Cannot find device "nvmf_init_if2" 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:10.851 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:10.852 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:10.852 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:10.852 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:10.852 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:10.852 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
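The trace above is nvmf_veth_init building the test topology: two initiator veth pairs kept on the host, two target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, addresses 10.0.0.1-10.0.0.4 spread across them, and every host-side peer enslaved to the nvmf_br bridge. A minimal stand-alone sketch of the same idea is below, cut down to a single initiator/target pair; the interface, namespace and address names follow the trace, while the reduction to one pair and the bare error handling are assumptions for brevity, not how common.sh itself is written.

# Sketch: one veth pair to a namespaced target, bridged on the host (run as root).
set -e
ip netns add nvmf_tgt_ns_spdk                              # target-side network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target end will be namespaced
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                            # bridge joining both host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                         # host reaches the namespaced target

The iptables ACCEPT rules and the four pings that follow in the log are the connectivity check for exactly this layout, done before the NVMe/TCP target is started.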
00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:11.111 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.111 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:11:11.111 00:11:11.111 --- 10.0.0.3 ping statistics --- 00:11:11.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.111 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:11:11.111 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:11.111 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:11.111 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:11:11.111 00:11:11.111 --- 10.0.0.4 ping statistics --- 00:11:11.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.112 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:11.112 00:11:11.112 --- 10.0.0.1 ping statistics --- 00:11:11.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.112 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:11.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:11.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:11:11.112 00:11:11.112 --- 10.0.0.2 ping statistics --- 00:11:11.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.112 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.112 08:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66263 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66263 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66263 ']' 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.112 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.112 [2024-11-17 08:11:16.120850] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:11.112 [2024-11-17 08:11:16.120969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.371 [2024-11-17 08:11:16.307286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.644 [2024-11-17 08:11:16.445123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.644 [2024-11-17 08:11:16.445177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.644 [2024-11-17 08:11:16.445195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.644 [2024-11-17 08:11:16.445232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.644 [2024-11-17 08:11:16.445245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.644 [2024-11-17 08:11:16.446664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.941 [2024-11-17 08:11:16.698474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.941 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.941 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:11:11.941 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.941 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.941 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:12.211 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.211 08:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.211 [2024-11-17 08:11:17.166622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:12.211 ************************************ 00:11:12.211 START TEST lvs_grow_clean 00:11:12.211 ************************************ 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:12.211 08:11:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:12.211 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:12.471 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:12.471 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:12.730 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:12.730 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:12.730 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:12.988 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:12.988 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:12.988 08:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba30097e-df55-4651-8145-b8f7cb69ac56 lvol 150 00:11:13.246 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a78681f0-39d9-47e9-a44f-deae5d94b213 00:11:13.247 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:13.247 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:13.506 [2024-11-17 08:11:18.315048] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:13.506 [2024-11-17 08:11:18.315177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:13.506 true 00:11:13.506 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:13.506 08:11:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:13.764 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:13.764 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:13.764 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a78681f0-39d9-47e9-a44f-deae5d94b213 00:11:14.024 08:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:14.283 [2024-11-17 08:11:19.170374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:14.283 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66348 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66348 /var/tmp/bdevperf.sock 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66348 ']' 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:14.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.542 08:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:14.542 [2024-11-17 08:11:19.490423] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:14.542 [2024-11-17 08:11:19.490758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66348 ] 00:11:14.801 [2024-11-17 08:11:19.674402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.801 [2024-11-17 08:11:19.783233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.060 [2024-11-17 08:11:19.976656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:15.320 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.320 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:11:15.320 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:15.579 Nvme0n1 00:11:15.579 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:15.838 [ 00:11:15.838 { 00:11:15.838 "name": "Nvme0n1", 00:11:15.838 "aliases": [ 00:11:15.838 "a78681f0-39d9-47e9-a44f-deae5d94b213" 00:11:15.839 ], 00:11:15.839 "product_name": "NVMe disk", 00:11:15.839 "block_size": 4096, 00:11:15.839 "num_blocks": 38912, 00:11:15.839 "uuid": "a78681f0-39d9-47e9-a44f-deae5d94b213", 00:11:15.839 "numa_id": -1, 00:11:15.839 "assigned_rate_limits": { 00:11:15.839 "rw_ios_per_sec": 0, 00:11:15.839 "rw_mbytes_per_sec": 0, 00:11:15.839 "r_mbytes_per_sec": 0, 00:11:15.839 "w_mbytes_per_sec": 0 00:11:15.839 }, 00:11:15.839 "claimed": false, 00:11:15.839 "zoned": false, 00:11:15.839 "supported_io_types": { 00:11:15.839 "read": true, 00:11:15.839 "write": true, 00:11:15.839 "unmap": true, 00:11:15.839 "flush": true, 00:11:15.839 "reset": true, 00:11:15.839 "nvme_admin": true, 00:11:15.839 "nvme_io": true, 00:11:15.839 "nvme_io_md": false, 00:11:15.839 "write_zeroes": true, 00:11:15.839 "zcopy": false, 00:11:15.839 "get_zone_info": false, 00:11:15.839 "zone_management": false, 00:11:15.839 "zone_append": false, 00:11:15.839 "compare": true, 00:11:15.839 "compare_and_write": true, 00:11:15.839 "abort": true, 00:11:15.839 "seek_hole": false, 00:11:15.839 "seek_data": false, 00:11:15.839 "copy": true, 00:11:15.839 "nvme_iov_md": false 00:11:15.839 }, 00:11:15.839 "memory_domains": [ 00:11:15.839 { 00:11:15.839 "dma_device_id": "system", 00:11:15.839 "dma_device_type": 1 00:11:15.839 } 00:11:15.839 ], 00:11:15.839 "driver_specific": { 00:11:15.839 "nvme": [ 00:11:15.839 { 00:11:15.839 "trid": { 00:11:15.839 "trtype": "TCP", 00:11:15.839 "adrfam": "IPv4", 00:11:15.839 "traddr": "10.0.0.3", 00:11:15.839 "trsvcid": "4420", 00:11:15.839 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:15.839 }, 00:11:15.839 "ctrlr_data": { 00:11:15.839 "cntlid": 1, 00:11:15.839 "vendor_id": "0x8086", 00:11:15.839 "model_number": "SPDK bdev Controller", 00:11:15.839 "serial_number": "SPDK0", 00:11:15.839 "firmware_revision": "25.01", 00:11:15.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:15.839 "oacs": { 00:11:15.839 "security": 0, 00:11:15.839 "format": 0, 00:11:15.839 "firmware": 0, 
00:11:15.839 "ns_manage": 0 00:11:15.839 }, 00:11:15.839 "multi_ctrlr": true, 00:11:15.839 "ana_reporting": false 00:11:15.839 }, 00:11:15.839 "vs": { 00:11:15.839 "nvme_version": "1.3" 00:11:15.839 }, 00:11:15.839 "ns_data": { 00:11:15.839 "id": 1, 00:11:15.839 "can_share": true 00:11:15.839 } 00:11:15.839 } 00:11:15.839 ], 00:11:15.839 "mp_policy": "active_passive" 00:11:15.839 } 00:11:15.839 } 00:11:15.839 ] 00:11:15.839 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:15.839 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66366 00:11:15.839 08:11:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:15.839 Running I/O for 10 seconds... 00:11:17.217 Latency(us) 00:11:17.217 [2024-11-17T08:11:22.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.217 Nvme0n1 : 1.00 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:11:17.217 [2024-11-17T08:11:22.229Z] =================================================================================================================== 00:11:17.217 [2024-11-17T08:11:22.229Z] Total : 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:11:17.217 00:11:17.786 08:11:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:17.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.786 Nvme0n1 : 2.00 9098.00 35.54 0.00 0.00 0.00 0.00 0.00 00:11:17.786 [2024-11-17T08:11:22.798Z] =================================================================================================================== 00:11:17.786 [2024-11-17T08:11:22.798Z] Total : 9098.00 35.54 0.00 0.00 0.00 0.00 0.00 00:11:17.786 00:11:18.045 true 00:11:18.045 08:11:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:18.045 08:11:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:18.304 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:18.304 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:18.304 08:11:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66366 00:11:18.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.873 Nvme0n1 : 3.00 9028.67 35.27 0.00 0.00 0.00 0.00 0.00 00:11:18.873 [2024-11-17T08:11:23.885Z] =================================================================================================================== 00:11:18.873 [2024-11-17T08:11:23.885Z] Total : 9028.67 35.27 0.00 0.00 0.00 0.00 0.00 00:11:18.873 00:11:19.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.809 Nvme0n1 : 4.00 8994.00 35.13 0.00 0.00 0.00 0.00 0.00 00:11:19.809 [2024-11-17T08:11:24.821Z] 
=================================================================================================================== 00:11:19.809 [2024-11-17T08:11:24.821Z] Total : 8994.00 35.13 0.00 0.00 0.00 0.00 0.00 00:11:19.809 00:11:21.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.195 Nvme0n1 : 5.00 8935.20 34.90 0.00 0.00 0.00 0.00 0.00 00:11:21.195 [2024-11-17T08:11:26.207Z] =================================================================================================================== 00:11:21.195 [2024-11-17T08:11:26.207Z] Total : 8935.20 34.90 0.00 0.00 0.00 0.00 0.00 00:11:21.195 00:11:22.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.130 Nvme0n1 : 6.00 8864.17 34.63 0.00 0.00 0.00 0.00 0.00 00:11:22.130 [2024-11-17T08:11:27.142Z] =================================================================================================================== 00:11:22.130 [2024-11-17T08:11:27.142Z] Total : 8864.17 34.63 0.00 0.00 0.00 0.00 0.00 00:11:22.130 00:11:23.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.065 Nvme0n1 : 7.00 8829.86 34.49 0.00 0.00 0.00 0.00 0.00 00:11:23.065 [2024-11-17T08:11:28.077Z] =================================================================================================================== 00:11:23.065 [2024-11-17T08:11:28.077Z] Total : 8829.86 34.49 0.00 0.00 0.00 0.00 0.00 00:11:23.065 00:11:24.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.003 Nvme0n1 : 8.00 8789.75 34.33 0.00 0.00 0.00 0.00 0.00 00:11:24.003 [2024-11-17T08:11:29.015Z] =================================================================================================================== 00:11:24.003 [2024-11-17T08:11:29.015Z] Total : 8789.75 34.33 0.00 0.00 0.00 0.00 0.00 00:11:24.003 00:11:24.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.941 Nvme0n1 : 9.00 8772.67 34.27 0.00 0.00 0.00 0.00 0.00 00:11:24.941 [2024-11-17T08:11:29.953Z] =================================================================================================================== 00:11:24.941 [2024-11-17T08:11:29.953Z] Total : 8772.67 34.27 0.00 0.00 0.00 0.00 0.00 00:11:24.941 00:11:25.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.880 Nvme0n1 : 10.00 8746.30 34.17 0.00 0.00 0.00 0.00 0.00 00:11:25.880 [2024-11-17T08:11:30.892Z] =================================================================================================================== 00:11:25.880 [2024-11-17T08:11:30.892Z] Total : 8746.30 34.17 0.00 0.00 0.00 0.00 0.00 00:11:25.880 00:11:25.880 00:11:25.880 Latency(us) 00:11:25.880 [2024-11-17T08:11:30.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.880 Nvme0n1 : 10.01 8746.34 34.17 0.00 0.00 14629.75 7422.15 32425.84 00:11:25.880 [2024-11-17T08:11:30.892Z] =================================================================================================================== 00:11:25.880 [2024-11-17T08:11:30.892Z] Total : 8746.34 34.17 0.00 0.00 14629.75 7422.15 32425.84 00:11:25.880 { 00:11:25.880 "results": [ 00:11:25.880 { 00:11:25.880 "job": "Nvme0n1", 00:11:25.880 "core_mask": "0x2", 00:11:25.880 "workload": "randwrite", 00:11:25.880 "status": "finished", 00:11:25.880 "queue_depth": 128, 00:11:25.880 "io_size": 4096, 00:11:25.880 "runtime": 
10.014591, 00:11:25.880 "iops": 8746.338217906254, 00:11:25.880 "mibps": 34.1653836636963, 00:11:25.880 "io_failed": 0, 00:11:25.880 "io_timeout": 0, 00:11:25.880 "avg_latency_us": 14629.751841717432, 00:11:25.880 "min_latency_us": 7422.149397590361, 00:11:25.880 "max_latency_us": 32425.84417670683 00:11:25.880 } 00:11:25.880 ], 00:11:25.880 "core_count": 1 00:11:25.880 } 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66348 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66348 ']' 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66348 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66348 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66348' 00:11:25.880 killing process with pid 66348 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66348 00:11:25.880 Received shutdown signal, test time was about 10.000000 seconds 00:11:25.880 00:11:25.880 Latency(us) 00:11:25.880 [2024-11-17T08:11:30.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.880 [2024-11-17T08:11:30.892Z] =================================================================================================================== 00:11:25.880 [2024-11-17T08:11:30.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:25.880 08:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66348 00:11:26.818 08:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:27.078 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:27.337 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:27.337 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:27.596 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:27.596 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:27.596 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:27.855 [2024-11-17 08:11:32.636957] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:27.855 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:27.855 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:11:27.855 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:27.856 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:28.115 request: 00:11:28.115 { 00:11:28.115 "uuid": "ba30097e-df55-4651-8145-b8f7cb69ac56", 00:11:28.115 "method": "bdev_lvol_get_lvstores", 00:11:28.115 "req_id": 1 00:11:28.115 } 00:11:28.115 Got JSON-RPC error response 00:11:28.115 response: 00:11:28.115 { 00:11:28.115 "code": -19, 00:11:28.115 "message": "No such device" 00:11:28.115 } 00:11:28.115 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:11:28.115 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:28.115 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:28.115 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:28.115 08:11:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:28.115 aio_bdev 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a78681f0-39d9-47e9-a44f-deae5d94b213 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a78681f0-39d9-47e9-a44f-deae5d94b213 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:28.374 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a78681f0-39d9-47e9-a44f-deae5d94b213 -t 2000 00:11:28.634 [ 00:11:28.634 { 00:11:28.634 "name": "a78681f0-39d9-47e9-a44f-deae5d94b213", 00:11:28.634 "aliases": [ 00:11:28.634 "lvs/lvol" 00:11:28.634 ], 00:11:28.634 "product_name": "Logical Volume", 00:11:28.634 "block_size": 4096, 00:11:28.634 "num_blocks": 38912, 00:11:28.634 "uuid": "a78681f0-39d9-47e9-a44f-deae5d94b213", 00:11:28.634 "assigned_rate_limits": { 00:11:28.634 "rw_ios_per_sec": 0, 00:11:28.634 "rw_mbytes_per_sec": 0, 00:11:28.634 "r_mbytes_per_sec": 0, 00:11:28.634 "w_mbytes_per_sec": 0 00:11:28.634 }, 00:11:28.634 "claimed": false, 00:11:28.634 "zoned": false, 00:11:28.634 "supported_io_types": { 00:11:28.634 "read": true, 00:11:28.634 "write": true, 00:11:28.634 "unmap": true, 00:11:28.634 "flush": false, 00:11:28.634 "reset": true, 00:11:28.634 "nvme_admin": false, 00:11:28.634 "nvme_io": false, 00:11:28.634 "nvme_io_md": false, 00:11:28.634 "write_zeroes": true, 00:11:28.634 "zcopy": false, 00:11:28.634 "get_zone_info": false, 00:11:28.634 "zone_management": false, 00:11:28.634 "zone_append": false, 00:11:28.634 "compare": false, 00:11:28.634 "compare_and_write": false, 00:11:28.634 "abort": false, 00:11:28.634 "seek_hole": true, 00:11:28.634 "seek_data": true, 00:11:28.634 "copy": false, 00:11:28.634 "nvme_iov_md": false 00:11:28.634 }, 00:11:28.634 "driver_specific": { 00:11:28.634 "lvol": { 00:11:28.634 "lvol_store_uuid": "ba30097e-df55-4651-8145-b8f7cb69ac56", 00:11:28.634 "base_bdev": "aio_bdev", 00:11:28.634 "thin_provision": false, 00:11:28.634 "num_allocated_clusters": 38, 00:11:28.634 "snapshot": false, 00:11:28.634 "clone": false, 00:11:28.634 "esnap_clone": false 00:11:28.634 } 00:11:28.634 } 00:11:28.634 } 00:11:28.634 ] 00:11:28.634 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:11:28.634 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:28.634 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:28.893 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:28.893 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:28.893 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:29.152 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:29.152 08:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a78681f0-39d9-47e9-a44f-deae5d94b213 00:11:29.411 08:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba30097e-df55-4651-8145-b8f7cb69ac56 00:11:29.670 08:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:29.670 08:11:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:30.237 ************************************ 00:11:30.237 END TEST lvs_grow_clean 00:11:30.237 ************************************ 00:11:30.237 00:11:30.237 real 0m17.886s 00:11:30.237 user 0m15.670s 00:11:30.237 sys 0m3.241s 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:30.237 ************************************ 00:11:30.237 START TEST lvs_grow_dirty 00:11:30.237 ************************************ 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:30.237 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:30.496 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:30.496 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:30.755 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:30.755 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:30.755 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:31.015 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:31.015 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:31.015 08:11:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 lvol 150 00:11:31.015 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:31.015 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:31.015 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:31.273 [2024-11-17 08:11:36.198018] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:31.273 [2024-11-17 08:11:36.198115] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:31.273 true 00:11:31.273 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:31.274 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:31.532 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:31.532 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:31.791 08:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:32.050 08:11:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:32.050 [2024-11-17 08:11:37.042081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66614 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66614 /var/tmp/bdevperf.sock 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66614 ']' 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.310 08:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:32.692 [2024-11-17 08:11:37.355594] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:32.692 [2024-11-17 08:11:37.355955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66614 ] 00:11:32.692 [2024-11-17 08:11:37.540747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.692 [2024-11-17 08:11:37.646187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.999 [2024-11-17 08:11:37.856019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.259 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.259 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:33.259 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:33.519 Nvme0n1 00:11:33.519 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:33.778 [ 00:11:33.778 { 00:11:33.778 "name": "Nvme0n1", 00:11:33.778 "aliases": [ 00:11:33.778 "60da18a7-d912-4a04-b35c-e1fea041f5ab" 00:11:33.778 ], 00:11:33.778 "product_name": "NVMe disk", 00:11:33.778 "block_size": 4096, 00:11:33.778 "num_blocks": 38912, 00:11:33.778 "uuid": "60da18a7-d912-4a04-b35c-e1fea041f5ab", 00:11:33.778 "numa_id": -1, 00:11:33.778 "assigned_rate_limits": { 00:11:33.778 "rw_ios_per_sec": 0, 00:11:33.778 "rw_mbytes_per_sec": 0, 00:11:33.778 "r_mbytes_per_sec": 0, 00:11:33.778 "w_mbytes_per_sec": 0 00:11:33.778 }, 00:11:33.778 "claimed": false, 00:11:33.778 "zoned": false, 00:11:33.778 "supported_io_types": { 00:11:33.778 "read": true, 00:11:33.778 "write": true, 00:11:33.778 "unmap": true, 00:11:33.778 "flush": true, 00:11:33.778 "reset": true, 00:11:33.778 "nvme_admin": true, 00:11:33.778 "nvme_io": true, 00:11:33.778 "nvme_io_md": false, 00:11:33.778 "write_zeroes": true, 00:11:33.778 "zcopy": false, 00:11:33.778 "get_zone_info": false, 00:11:33.778 "zone_management": false, 00:11:33.778 "zone_append": false, 00:11:33.778 "compare": true, 00:11:33.778 "compare_and_write": true, 00:11:33.778 "abort": true, 00:11:33.778 "seek_hole": false, 00:11:33.778 "seek_data": false, 00:11:33.778 "copy": true, 00:11:33.778 "nvme_iov_md": false 00:11:33.778 }, 00:11:33.778 "memory_domains": [ 00:11:33.778 { 00:11:33.778 "dma_device_id": "system", 00:11:33.778 "dma_device_type": 1 00:11:33.778 } 00:11:33.778 ], 00:11:33.778 "driver_specific": { 00:11:33.778 "nvme": [ 00:11:33.778 { 00:11:33.778 "trid": { 00:11:33.778 "trtype": "TCP", 00:11:33.778 "adrfam": "IPv4", 00:11:33.779 "traddr": "10.0.0.3", 00:11:33.779 "trsvcid": "4420", 00:11:33.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:33.779 }, 00:11:33.779 "ctrlr_data": { 00:11:33.779 "cntlid": 1, 00:11:33.779 "vendor_id": "0x8086", 00:11:33.779 "model_number": "SPDK bdev Controller", 00:11:33.779 "serial_number": "SPDK0", 00:11:33.779 "firmware_revision": "25.01", 00:11:33.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:33.779 "oacs": { 00:11:33.779 "security": 0, 00:11:33.779 "format": 0, 00:11:33.779 "firmware": 0, 
00:11:33.779 "ns_manage": 0 00:11:33.779 }, 00:11:33.779 "multi_ctrlr": true, 00:11:33.779 "ana_reporting": false 00:11:33.779 }, 00:11:33.779 "vs": { 00:11:33.779 "nvme_version": "1.3" 00:11:33.779 }, 00:11:33.779 "ns_data": { 00:11:33.779 "id": 1, 00:11:33.779 "can_share": true 00:11:33.779 } 00:11:33.779 } 00:11:33.779 ], 00:11:33.779 "mp_policy": "active_passive" 00:11:33.779 } 00:11:33.779 } 00:11:33.779 ] 00:11:33.779 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66637 00:11:33.779 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:33.779 08:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:33.779 Running I/O for 10 seconds... 00:11:34.725 Latency(us) 00:11:34.725 [2024-11-17T08:11:39.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:34.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.725 Nvme0n1 : 1.00 8636.00 33.73 0.00 0.00 0.00 0.00 0.00 00:11:34.725 [2024-11-17T08:11:39.737Z] =================================================================================================================== 00:11:34.725 [2024-11-17T08:11:39.737Z] Total : 8636.00 33.73 0.00 0.00 0.00 0.00 0.00 00:11:34.725 00:11:35.663 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:35.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.922 Nvme0n1 : 2.00 8763.00 34.23 0.00 0.00 0.00 0.00 0.00 00:11:35.922 [2024-11-17T08:11:40.934Z] =================================================================================================================== 00:11:35.922 [2024-11-17T08:11:40.934Z] Total : 8763.00 34.23 0.00 0.00 0.00 0.00 0.00 00:11:35.922 00:11:35.922 true 00:11:35.922 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:35.922 08:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:36.182 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:36.182 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:36.182 08:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66637 00:11:36.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.754 Nvme0n1 : 3.00 8847.67 34.56 0.00 0.00 0.00 0.00 0.00 00:11:36.754 [2024-11-17T08:11:41.766Z] =================================================================================================================== 00:11:36.754 [2024-11-17T08:11:41.766Z] Total : 8847.67 34.56 0.00 0.00 0.00 0.00 0.00 00:11:36.754 00:11:37.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.693 Nvme0n1 : 4.00 8858.25 34.60 0.00 0.00 0.00 0.00 0.00 00:11:37.693 [2024-11-17T08:11:42.705Z] 
=================================================================================================================== 00:11:37.693 [2024-11-17T08:11:42.705Z] Total : 8858.25 34.60 0.00 0.00 0.00 0.00 0.00 00:11:37.693 00:11:39.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.072 Nvme0n1 : 5.00 8839.20 34.53 0.00 0.00 0.00 0.00 0.00 00:11:39.072 [2024-11-17T08:11:44.084Z] =================================================================================================================== 00:11:39.072 [2024-11-17T08:11:44.084Z] Total : 8839.20 34.53 0.00 0.00 0.00 0.00 0.00 00:11:39.073 00:11:40.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.010 Nvme0n1 : 6.00 8826.50 34.48 0.00 0.00 0.00 0.00 0.00 00:11:40.010 [2024-11-17T08:11:45.022Z] =================================================================================================================== 00:11:40.010 [2024-11-17T08:11:45.022Z] Total : 8826.50 34.48 0.00 0.00 0.00 0.00 0.00 00:11:40.010 00:11:40.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.948 Nvme0n1 : 7.00 8597.57 33.58 0.00 0.00 0.00 0.00 0.00 00:11:40.948 [2024-11-17T08:11:45.960Z] =================================================================================================================== 00:11:40.948 [2024-11-17T08:11:45.960Z] Total : 8597.57 33.58 0.00 0.00 0.00 0.00 0.00 00:11:40.948 00:11:41.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.886 Nvme0n1 : 8.00 8570.62 33.48 0.00 0.00 0.00 0.00 0.00 00:11:41.886 [2024-11-17T08:11:46.898Z] =================================================================================================================== 00:11:41.886 [2024-11-17T08:11:46.898Z] Total : 8570.62 33.48 0.00 0.00 0.00 0.00 0.00 00:11:41.886 00:11:42.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.834 Nvme0n1 : 9.00 8563.78 33.45 0.00 0.00 0.00 0.00 0.00 00:11:42.834 [2024-11-17T08:11:47.846Z] =================================================================================================================== 00:11:42.834 [2024-11-17T08:11:47.846Z] Total : 8563.78 33.45 0.00 0.00 0.00 0.00 0.00 00:11:42.834 00:11:43.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.773 Nvme0n1 : 10.00 8558.30 33.43 0.00 0.00 0.00 0.00 0.00 00:11:43.773 [2024-11-17T08:11:48.785Z] =================================================================================================================== 00:11:43.773 [2024-11-17T08:11:48.785Z] Total : 8558.30 33.43 0.00 0.00 0.00 0.00 0.00 00:11:43.773 00:11:43.773 00:11:43.773 Latency(us) 00:11:43.773 [2024-11-17T08:11:48.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.773 Nvme0n1 : 10.01 8561.14 33.44 0.00 0.00 14948.50 5527.13 180237.16 00:11:43.773 [2024-11-17T08:11:48.785Z] =================================================================================================================== 00:11:43.773 [2024-11-17T08:11:48.785Z] Total : 8561.14 33.44 0.00 0.00 14948.50 5527.13 180237.16 00:11:43.773 { 00:11:43.773 "results": [ 00:11:43.773 { 00:11:43.773 "job": "Nvme0n1", 00:11:43.773 "core_mask": "0x2", 00:11:43.773 "workload": "randwrite", 00:11:43.773 "status": "finished", 00:11:43.773 "queue_depth": 128, 00:11:43.773 "io_size": 4096, 00:11:43.773 "runtime": 
10.011634, 00:11:43.773 "iops": 8561.139969759182, 00:11:43.773 "mibps": 33.441953006871806, 00:11:43.773 "io_failed": 0, 00:11:43.773 "io_timeout": 0, 00:11:43.773 "avg_latency_us": 14948.495992983615, 00:11:43.773 "min_latency_us": 5527.1325301204815, 00:11:43.773 "max_latency_us": 180237.15983935742 00:11:43.773 } 00:11:43.773 ], 00:11:43.773 "core_count": 1 00:11:43.773 } 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66614 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66614 ']' 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66614 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66614 00:11:43.773 killing process with pid 66614 00:11:43.773 Received shutdown signal, test time was about 10.000000 seconds 00:11:43.773 00:11:43.773 Latency(us) 00:11:43.773 [2024-11-17T08:11:48.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.773 [2024-11-17T08:11:48.785Z] =================================================================================================================== 00:11:43.773 [2024-11-17T08:11:48.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66614' 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66614 00:11:43.773 08:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66614 00:11:45.155 08:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:45.155 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:45.414 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:45.414 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66263 
00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66263 00:11:45.673 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66263 Killed "${NVMF_APP[@]}" "$@" 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66777 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66777 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66777 ']' 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.673 08:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:45.673 [2024-11-17 08:11:50.610247] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:45.673 [2024-11-17 08:11:50.610353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.933 [2024-11-17 08:11:50.796642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.933 [2024-11-17 08:11:50.925801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.933 [2024-11-17 08:11:50.925861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.933 [2024-11-17 08:11:50.925878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.933 [2024-11-17 08:11:50.925900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.933 [2024-11-17 08:11:50.925914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:45.933 [2024-11-17 08:11:50.927273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.193 [2024-11-17 08:11:51.168764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.453 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.453 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:46.453 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.453 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.453 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:46.712 [2024-11-17 08:11:51.667703] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:46.712 [2024-11-17 08:11:51.668067] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:46.712 [2024-11-17 08:11:51.668352] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.712 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:46.971 08:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60da18a7-d912-4a04-b35c-e1fea041f5ab -t 2000 00:11:47.231 [ 00:11:47.231 { 00:11:47.231 "name": "60da18a7-d912-4a04-b35c-e1fea041f5ab", 00:11:47.231 "aliases": [ 00:11:47.231 "lvs/lvol" 00:11:47.231 ], 00:11:47.231 "product_name": "Logical Volume", 00:11:47.231 "block_size": 4096, 00:11:47.231 "num_blocks": 38912, 00:11:47.231 "uuid": "60da18a7-d912-4a04-b35c-e1fea041f5ab", 00:11:47.231 "assigned_rate_limits": { 00:11:47.231 "rw_ios_per_sec": 0, 00:11:47.231 "rw_mbytes_per_sec": 0, 00:11:47.231 "r_mbytes_per_sec": 0, 00:11:47.231 "w_mbytes_per_sec": 0 00:11:47.231 }, 00:11:47.231 
"claimed": false, 00:11:47.231 "zoned": false, 00:11:47.231 "supported_io_types": { 00:11:47.231 "read": true, 00:11:47.231 "write": true, 00:11:47.231 "unmap": true, 00:11:47.231 "flush": false, 00:11:47.231 "reset": true, 00:11:47.231 "nvme_admin": false, 00:11:47.231 "nvme_io": false, 00:11:47.231 "nvme_io_md": false, 00:11:47.231 "write_zeroes": true, 00:11:47.231 "zcopy": false, 00:11:47.231 "get_zone_info": false, 00:11:47.231 "zone_management": false, 00:11:47.231 "zone_append": false, 00:11:47.231 "compare": false, 00:11:47.231 "compare_and_write": false, 00:11:47.231 "abort": false, 00:11:47.231 "seek_hole": true, 00:11:47.231 "seek_data": true, 00:11:47.231 "copy": false, 00:11:47.231 "nvme_iov_md": false 00:11:47.231 }, 00:11:47.231 "driver_specific": { 00:11:47.231 "lvol": { 00:11:47.231 "lvol_store_uuid": "8e4f6aa3-f5f4-4e49-9481-288b559cbea9", 00:11:47.231 "base_bdev": "aio_bdev", 00:11:47.231 "thin_provision": false, 00:11:47.231 "num_allocated_clusters": 38, 00:11:47.231 "snapshot": false, 00:11:47.231 "clone": false, 00:11:47.231 "esnap_clone": false 00:11:47.231 } 00:11:47.231 } 00:11:47.231 } 00:11:47.231 ] 00:11:47.231 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:47.231 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:47.231 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:47.491 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:47.491 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:47.491 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:47.750 [2024-11-17 08:11:52.682885] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.750 08:11:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.750 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.751 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.751 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.751 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:47.751 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:48.010 request: 00:11:48.010 { 00:11:48.010 "uuid": "8e4f6aa3-f5f4-4e49-9481-288b559cbea9", 00:11:48.010 "method": "bdev_lvol_get_lvstores", 00:11:48.010 "req_id": 1 00:11:48.010 } 00:11:48.010 Got JSON-RPC error response 00:11:48.010 response: 00:11:48.010 { 00:11:48.010 "code": -19, 00:11:48.010 "message": "No such device" 00:11:48.010 } 00:11:48.010 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:48.010 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:48.010 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:48.010 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:48.010 08:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:48.270 aio_bdev 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.270 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:48.529 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60da18a7-d912-4a04-b35c-e1fea041f5ab -t 2000 00:11:48.529 [ 00:11:48.529 { 
00:11:48.529 "name": "60da18a7-d912-4a04-b35c-e1fea041f5ab", 00:11:48.529 "aliases": [ 00:11:48.529 "lvs/lvol" 00:11:48.529 ], 00:11:48.529 "product_name": "Logical Volume", 00:11:48.529 "block_size": 4096, 00:11:48.529 "num_blocks": 38912, 00:11:48.529 "uuid": "60da18a7-d912-4a04-b35c-e1fea041f5ab", 00:11:48.529 "assigned_rate_limits": { 00:11:48.529 "rw_ios_per_sec": 0, 00:11:48.529 "rw_mbytes_per_sec": 0, 00:11:48.529 "r_mbytes_per_sec": 0, 00:11:48.529 "w_mbytes_per_sec": 0 00:11:48.529 }, 00:11:48.529 "claimed": false, 00:11:48.529 "zoned": false, 00:11:48.529 "supported_io_types": { 00:11:48.529 "read": true, 00:11:48.529 "write": true, 00:11:48.529 "unmap": true, 00:11:48.529 "flush": false, 00:11:48.529 "reset": true, 00:11:48.529 "nvme_admin": false, 00:11:48.529 "nvme_io": false, 00:11:48.529 "nvme_io_md": false, 00:11:48.529 "write_zeroes": true, 00:11:48.529 "zcopy": false, 00:11:48.529 "get_zone_info": false, 00:11:48.529 "zone_management": false, 00:11:48.529 "zone_append": false, 00:11:48.529 "compare": false, 00:11:48.529 "compare_and_write": false, 00:11:48.529 "abort": false, 00:11:48.529 "seek_hole": true, 00:11:48.529 "seek_data": true, 00:11:48.529 "copy": false, 00:11:48.529 "nvme_iov_md": false 00:11:48.529 }, 00:11:48.529 "driver_specific": { 00:11:48.529 "lvol": { 00:11:48.529 "lvol_store_uuid": "8e4f6aa3-f5f4-4e49-9481-288b559cbea9", 00:11:48.529 "base_bdev": "aio_bdev", 00:11:48.529 "thin_provision": false, 00:11:48.529 "num_allocated_clusters": 38, 00:11:48.529 "snapshot": false, 00:11:48.529 "clone": false, 00:11:48.529 "esnap_clone": false 00:11:48.529 } 00:11:48.529 } 00:11:48.529 } 00:11:48.529 ] 00:11:48.529 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:48.529 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:48.529 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:48.788 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:48.789 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:48.789 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:49.048 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:49.048 08:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 60da18a7-d912-4a04-b35c-e1fea041f5ab 00:11:49.306 08:11:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e4f6aa3-f5f4-4e49-9481-288b559cbea9 00:11:49.565 08:11:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:49.565 08:11:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:50.134 ************************************ 00:11:50.134 END TEST lvs_grow_dirty 00:11:50.134 ************************************ 00:11:50.134 00:11:50.134 real 0m19.810s 00:11:50.134 user 0m40.610s 00:11:50.134 sys 0m8.000s 00:11:50.134 08:11:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.134 08:11:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:50.134 nvmf_trace.0 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:50.134 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.703 rmmod nvme_tcp 00:11:50.703 rmmod nvme_fabrics 00:11:50.703 rmmod nvme_keyring 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66777 ']' 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66777 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66777 ']' 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66777 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:50.703 08:11:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.703 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66777 00:11:50.703 killing process with pid 66777 00:11:50.704 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.704 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.704 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66777' 00:11:50.704 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66777 00:11:50.704 08:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66777 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:52.083 08:11:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:52.083 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:52.083 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.083 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:52.343 00:11:52.343 real 0m42.039s 00:11:52.343 user 1m2.965s 00:11:52.343 sys 0m12.710s 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:52.343 ************************************ 00:11:52.343 END TEST nvmf_lvs_grow 00:11:52.343 ************************************ 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:52.343 ************************************ 00:11:52.343 START TEST nvmf_bdev_io_wait 00:11:52.343 ************************************ 00:11:52.343 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:52.610 * Looking for test storage... 
00:11:52.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.610 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.611 --rc genhtml_branch_coverage=1 00:11:52.611 --rc genhtml_function_coverage=1 00:11:52.611 --rc genhtml_legend=1 00:11:52.611 --rc geninfo_all_blocks=1 00:11:52.611 --rc geninfo_unexecuted_blocks=1 00:11:52.611 00:11:52.611 ' 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.611 --rc genhtml_branch_coverage=1 00:11:52.611 --rc genhtml_function_coverage=1 00:11:52.611 --rc genhtml_legend=1 00:11:52.611 --rc geninfo_all_blocks=1 00:11:52.611 --rc geninfo_unexecuted_blocks=1 00:11:52.611 00:11:52.611 ' 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.611 --rc genhtml_branch_coverage=1 00:11:52.611 --rc genhtml_function_coverage=1 00:11:52.611 --rc genhtml_legend=1 00:11:52.611 --rc geninfo_all_blocks=1 00:11:52.611 --rc geninfo_unexecuted_blocks=1 00:11:52.611 00:11:52.611 ' 00:11:52.611 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.611 --rc genhtml_branch_coverage=1 00:11:52.611 --rc genhtml_function_coverage=1 00:11:52.611 --rc genhtml_legend=1 00:11:52.612 --rc geninfo_all_blocks=1 00:11:52.612 --rc geninfo_unexecuted_blocks=1 00:11:52.612 00:11:52.612 ' 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.612 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.613 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.614 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.614 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:52.615 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.616 
08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:52.616 Cannot find device "nvmf_init_br" 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:52.616 Cannot find device "nvmf_init_br2" 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:52.616 Cannot find device "nvmf_tgt_br" 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:52.616 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.877 Cannot find device "nvmf_tgt_br2" 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:52.877 Cannot find device "nvmf_init_br" 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:52.877 Cannot find device "nvmf_init_br2" 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:52.877 Cannot find device "nvmf_tgt_br" 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:52.877 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:52.877 Cannot find device "nvmf_tgt_br2" 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:52.878 Cannot find device "nvmf_br" 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:52.878 Cannot find device "nvmf_init_if" 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:52.878 Cannot find device "nvmf_init_if2" 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:52.878 
08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.878 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:53.138 08:11:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:53.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:11:53.138 00:11:53.138 --- 10.0.0.3 ping statistics --- 00:11:53.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.138 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:53.138 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:53.138 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:11:53.138 00:11:53.138 --- 10.0.0.4 ping statistics --- 00:11:53.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.138 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:53.138 00:11:53.138 --- 10.0.0.1 ping statistics --- 00:11:53.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.138 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:53.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:11:53.138 00:11:53.138 --- 10.0.0.2 ping statistics --- 00:11:53.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.138 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.138 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67151 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67151 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67151 ']' 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.398 08:11:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:53.398 [2024-11-17 08:11:58.270549] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
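Taken together, the commands traced above build a private test network and then start the NVMe-oF target inside it: network namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the host keeps the initiator ends (10.0.0.1 and 10.0.0.2), the four *_br peer legs are enslaved to bridge nvmf_br, and iptables ACCEPT rules tagged with an SPDK_NVMF comment open TCP port 4420 on the initiator interfaces; the four pings confirm connectivity in both directions. The recap below is a sketch assembled from the logged commands, not the nvmf/common.sh source; the second veth pair is handled identically and is omitted for brevity.

  # Condensed sketch of the setup traced above (first veth pair only).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # The target is then launched inside the namespace (command and core mask as logged):
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc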
00:11:53.398 [2024-11-17 08:11:58.270653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.658 [2024-11-17 08:11:58.454613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.658 [2024-11-17 08:11:58.592673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.658 [2024-11-17 08:11:58.592731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.658 [2024-11-17 08:11:58.592748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.658 [2024-11-17 08:11:58.592760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.658 [2024-11-17 08:11:58.592773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.658 [2024-11-17 08:11:58.595330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.658 [2024-11-17 08:11:58.595577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.658 [2024-11-17 08:11:58.596432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.658 [2024-11-17 08:11:58.596494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.226 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 [2024-11-17 08:11:59.370352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 [2024-11-17 08:11:59.393310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 Malloc0 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.746 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.746 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.746 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.746 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:54.747 [2024-11-17 08:11:59.511745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67197 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67199 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.747 08:11:59 
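With the target idling behind --wait-for-rpc, bdev_io_wait.sh configures it entirely over JSON-RPC: a deliberately small bdev I/O pool and cache (which is what makes the I/O-wait path fire), a TCP transport, a 64 MiB / 512-byte-block malloc bdev, and a subsystem exposing that bdev on 10.0.0.3:4420. The test drives this through its rpc_cmd wrapper; the sketch below restates the same calls with scripts/rpc.py as an assumed-equivalent invocation, using the arguments exactly as logged.

  # Hedged restatement of the RPC sequence traced above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, set before init
  $RPC framework_start_init              # completes the --wait-for-rpc startup
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420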
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.747 { 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme$subsystem", 00:11:54.747 "trtype": "$TEST_TRANSPORT", 00:11:54.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "$NVMF_PORT", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.747 "hdgst": ${hdgst:-false}, 00:11:54.747 "ddgst": ${ddgst:-false} 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 } 00:11:54.747 EOF 00:11:54.747 )") 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67201 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.747 { 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme$subsystem", 00:11:54.747 "trtype": "$TEST_TRANSPORT", 00:11:54.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "$NVMF_PORT", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.747 "hdgst": ${hdgst:-false}, 00:11:54.747 "ddgst": ${ddgst:-false} 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 } 00:11:54.747 EOF 00:11:54.747 )") 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67204 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.747 { 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme$subsystem", 00:11:54.747 "trtype": "$TEST_TRANSPORT", 00:11:54.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": 
"$NVMF_PORT", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.747 "hdgst": ${hdgst:-false}, 00:11:54.747 "ddgst": ${ddgst:-false} 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 } 00:11:54.747 EOF 00:11:54.747 )") 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.747 { 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme$subsystem", 00:11:54.747 "trtype": "$TEST_TRANSPORT", 00:11:54.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "$NVMF_PORT", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.747 "hdgst": ${hdgst:-false}, 00:11:54.747 "ddgst": ${ddgst:-false} 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 } 00:11:54.747 EOF 00:11:54.747 )") 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme1", 00:11:54.747 "trtype": "tcp", 00:11:54.747 "traddr": "10.0.0.3", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "4420", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.747 "hdgst": false, 00:11:54.747 "ddgst": false 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 }' 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme1", 00:11:54.747 "trtype": "tcp", 00:11:54.747 "traddr": "10.0.0.3", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "4420", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.747 "hdgst": false, 00:11:54.747 "ddgst": false 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 }' 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme1", 00:11:54.747 "trtype": "tcp", 00:11:54.747 "traddr": "10.0.0.3", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "4420", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.747 "hdgst": false, 00:11:54.747 "ddgst": false 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 }' 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.747 "params": { 00:11:54.747 "name": "Nvme1", 00:11:54.747 "trtype": "tcp", 00:11:54.747 "traddr": "10.0.0.3", 00:11:54.747 "adrfam": "ipv4", 00:11:54.747 "trsvcid": "4420", 00:11:54.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.747 "hdgst": false, 00:11:54.747 "ddgst": false 00:11:54.747 }, 00:11:54.747 "method": "bdev_nvme_attach_controller" 00:11:54.747 }' 00:11:54.747 08:11:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67197 00:11:54.747 [2024-11-17 08:11:59.634229] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:54.747 [2024-11-17 08:11:59.634476] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:54.748 [2024-11-17 08:11:59.636066] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:54.748 [2024-11-17 08:11:59.636190] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:54.748 [2024-11-17 08:11:59.643078] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:54.748 [2024-11-17 08:11:59.643188] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:54.748 [2024-11-17 08:11:59.653508] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:54.748 [2024-11-17 08:11:59.653616] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:55.009 [2024-11-17 08:11:59.869606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.009 [2024-11-17 08:11:59.999259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.009 [2024-11-17 08:12:00.007854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:55.268 [2024-11-17 08:12:00.112544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:55.268 [2024-11-17 08:12:00.125104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.268 [2024-11-17 08:12:00.206221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.268 [2024-11-17 08:12:00.234367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.268 [2024-11-17 08:12:00.255769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:55.528 [2024-11-17 08:12:00.288613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.528 [2024-11-17 08:12:00.359845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:55.528 Running I/O for 1 seconds... 00:11:55.528 [2024-11-17 08:12:00.441832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.528 Running I/O for 1 seconds... 00:11:55.787 [2024-11-17 08:12:00.543542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.787 Running I/O for 1 seconds... 00:11:55.787 Running I/O for 1 seconds... 
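Core placement keeps the five SPDK processes from contending: the target owns cores 0-3 (-m 0xF, the earlier "Reactor started on core 0..3" lines), while the bdevperf instances land on cores 4-7 (-m 0x10, 0x20, 0x40 and 0x80, matching the "Reactor started on core 4..7" lines above). An SPDK core mask is simply a bit set over core indices; a small illustrative decoder, not part of the test scripts:

  # Print the cores selected by a hex core mask (illustrative only).
  mask=0xF0                       # the four bdevperf cores combined
  for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && echo "core $i"
  done                            # -> core 4, core 5, core 6, core 7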
00:11:56.723 198600.00 IOPS, 775.78 MiB/s 00:11:56.723 Latency(us) 00:11:56.723 [2024-11-17T08:12:01.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.723 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:56.723 Nvme1n1 : 1.00 198260.25 774.45 0.00 0.00 642.66 337.22 2210.85 00:11:56.723 [2024-11-17T08:12:01.735Z] =================================================================================================================== 00:11:56.723 [2024-11-17T08:12:01.735Z] Total : 198260.25 774.45 0.00 0.00 642.66 337.22 2210.85 00:11:56.723 6984.00 IOPS, 27.28 MiB/s 00:11:56.723 Latency(us) 00:11:56.723 [2024-11-17T08:12:01.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.723 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:56.723 Nvme1n1 : 1.02 7006.71 27.37 0.00 0.00 18128.06 6843.12 27372.47 00:11:56.723 [2024-11-17T08:12:01.735Z] =================================================================================================================== 00:11:56.723 [2024-11-17T08:12:01.735Z] Total : 7006.71 27.37 0.00 0.00 18128.06 6843.12 27372.47 00:11:56.723 5509.00 IOPS, 21.52 MiB/s 00:11:56.723 Latency(us) 00:11:56.723 [2024-11-17T08:12:01.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.723 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:56.723 Nvme1n1 : 1.01 5584.05 21.81 0.00 0.00 22794.89 3947.95 40216.47 00:11:56.724 [2024-11-17T08:12:01.736Z] =================================================================================================================== 00:11:56.724 [2024-11-17T08:12:01.736Z] Total : 5584.05 21.81 0.00 0.00 22794.89 3947.95 40216.47 00:11:56.983 6062.00 IOPS, 23.68 MiB/s 00:11:56.983 Latency(us) 00:11:56.983 [2024-11-17T08:12:01.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.983 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:56.983 Nvme1n1 : 1.01 6132.35 23.95 0.00 0.00 20754.55 9843.56 39795.35 00:11:56.983 [2024-11-17T08:12:01.995Z] =================================================================================================================== 00:11:56.983 [2024-11-17T08:12:01.995Z] Total : 6132.35 23.95 0.00 0.00 20754.55 9843.56 39795.35 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67199 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67201 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67204 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:11:57.552 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.812 rmmod nvme_tcp 00:11:57.812 rmmod nvme_fabrics 00:11:57.812 rmmod nvme_keyring 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67151 ']' 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67151 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67151 ']' 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67151 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67151 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67151' 00:11:57.812 killing process with pid 67151 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67151 00:11:57.812 08:12:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67151 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:58.748 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:59.008 08:12:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.008 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.268 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:59.268 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.268 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.268 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.268 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:59.268 00:11:59.268 real 0m6.836s 00:11:59.268 user 0m27.546s 00:11:59.268 sys 0m3.437s 00:11:59.268 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.269 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:59.269 ************************************ 00:11:59.269 END TEST nvmf_bdev_io_wait 00:11:59.269 ************************************ 00:11:59.269 08:12:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:59.269 08:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.269 08:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.269 08:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.269 ************************************ 00:11:59.269 START TEST nvmf_queue_depth 00:11:59.269 ************************************ 00:11:59.269 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:59.529 * Looking for test storage... 
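Teardown mirrors the setup: nvmftestfini removes the nvme-tcp/fabrics/keyring modules, kills the target (pid 67151), strips exactly the iptables rules that were tagged with the SPDK_NVMF comment, then deletes the veth devices, the bridge and the namespace; the run time reported above for the whole nvmf_bdev_io_wait test is about 6.8 s. The comment tag is what makes the firewall cleanup safe on a shared host, since only the test's own rules match:

  # Recap of the iptables cleanup traced above: restore every rule except
  # the ones this test tagged with SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore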
00:11:59.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.530 --rc genhtml_branch_coverage=1 00:11:59.530 --rc genhtml_function_coverage=1 00:11:59.530 --rc genhtml_legend=1 00:11:59.530 --rc geninfo_all_blocks=1 00:11:59.530 --rc geninfo_unexecuted_blocks=1 00:11:59.530 00:11:59.530 ' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.530 --rc genhtml_branch_coverage=1 00:11:59.530 --rc genhtml_function_coverage=1 00:11:59.530 --rc genhtml_legend=1 00:11:59.530 --rc geninfo_all_blocks=1 00:11:59.530 --rc geninfo_unexecuted_blocks=1 00:11:59.530 00:11:59.530 ' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.530 --rc genhtml_branch_coverage=1 00:11:59.530 --rc genhtml_function_coverage=1 00:11:59.530 --rc genhtml_legend=1 00:11:59.530 --rc geninfo_all_blocks=1 00:11:59.530 --rc geninfo_unexecuted_blocks=1 00:11:59.530 00:11:59.530 ' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.530 --rc genhtml_branch_coverage=1 00:11:59.530 --rc genhtml_function_coverage=1 00:11:59.530 --rc genhtml_legend=1 00:11:59.530 --rc geninfo_all_blocks=1 00:11:59.530 --rc geninfo_unexecuted_blocks=1 00:11:59.530 00:11:59.530 ' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.530 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.530 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:59.531 
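The "[: : integer expression expected" message above is a benign bash complaint rather than a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while the variable it expands is empty, the test simply returns false, and the trace shows the script moving on to the next check. Expanding the variable with a numeric default would silence it; the variable name below is a placeholder, since the trace does not show which variable is empty in this run.

  # Hedged sketch only; SOME_FLAG stands in for the empty variable at
  # nvmf/common.sh line 33.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi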
08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.531 08:12:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:59.531 Cannot find device "nvmf_init_br" 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:59.531 Cannot find device "nvmf_init_br2" 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:59.531 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:59.791 Cannot find device "nvmf_tgt_br" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.791 Cannot find device "nvmf_tgt_br2" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:59.791 Cannot find device "nvmf_init_br" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:59.791 Cannot find device "nvmf_init_br2" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:59.791 Cannot find device "nvmf_tgt_br" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:59.791 Cannot find device "nvmf_tgt_br2" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:59.791 Cannot find device "nvmf_br" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:59.791 Cannot find device "nvmf_init_if" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:59.791 Cannot find device "nvmf_init_if2" 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.791 08:12:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.791 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:00.051 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:00.052 
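For nvmf_queue_depth the veth/bridge topology is rebuilt from scratch; the second "Cannot find device" block above is the same pre-clean running against devices the previous test already deleted, so each test starts from a fresh network. If a run ever needs manual inspection, the wiring can be checked with standard iproute2 commands; these are generic and not part of the SPDK scripts:

  # Generic inspection commands (not from the test scripts).
  ip netns list                                 # should list nvmf_tgt_ns_spdk
  ip -br addr show dev nvmf_init_if             # expect 10.0.0.1/24
  ip netns exec nvmf_tgt_ns_spdk ip -br addr    # expect 10.0.0.3/24 and 10.0.0.4/24
  bridge link show                              # the *_br legs enslaved to nvmf_br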
08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:00.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:00.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.139 ms 00:12:00.052 00:12:00.052 --- 10.0.0.3 ping statistics --- 00:12:00.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.052 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:00.052 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:00.052 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:12:00.052 00:12:00.052 --- 10.0.0.4 ping statistics --- 00:12:00.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.052 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:00.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:12:00.052 00:12:00.052 --- 10.0.0.1 ping statistics --- 00:12:00.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.052 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:00.052 08:12:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:00.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:00.052 00:12:00.052 --- 10.0.0.2 ping statistics --- 00:12:00.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.052 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67516 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67516 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67516 ']' 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.052 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:00.312 [2024-11-17 08:12:05.160128] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
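Condensed from the nvmf/common.sh trace above, the per-test environment bring-up boils down to the following sequence (a sketch of the logged commands, not the harness code itself; interface names, addresses, and paths are the ones that appear in the trace, and the loop is just shorthand for the individual commands the script runs):

  # namespace for the SPDK target plus four veth pairs, one end of each left on the host side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator side gets 10.0.0.1/.2, target side (inside the namespace) gets 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring every end up and enslave the four host-side peer ends to one Linux bridge
  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br_end" up
      ip link set "$br_end" master nvmf_br
  done

  # accept NVMe/TCP on port 4420 (the real rules carry an '-m comment SPDK_NVMF:...' tag
  # so the teardown can strip exactly these rules later), then sanity-ping all four addresses
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

  # load the kernel initiator and start the target inside the namespace on core 1 (-m 0x2);
  # the harness then polls the RPC socket /var/tmp/spdk.sock before issuing configuration RPCs
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &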
00:12:00.312 [2024-11-17 08:12:05.160262] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.572 [2024-11-17 08:12:05.351918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.572 [2024-11-17 08:12:05.456168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.572 [2024-11-17 08:12:05.456224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.572 [2024-11-17 08:12:05.456239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.572 [2024-11-17 08:12:05.456259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.572 [2024-11-17 08:12:05.456271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.572 [2024-11-17 08:12:05.457566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.831 [2024-11-17 08:12:05.644809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.091 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.091 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:01.091 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.091 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.091 08:12:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.091 [2024-11-17 08:12:06.028917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.091 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.351 Malloc0 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.351 [2024-11-17 08:12:06.166105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67548 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67548 /var/tmp/bdevperf.sock 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67548 ']' 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.351 08:12:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:01.351 [2024-11-17 08:12:06.276452] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
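With the target RPC socket up, the queue_depth test body traced above amounts to a handful of RPCs plus one bdevperf initiator. The rpc_cmd helper in the trace wraps the same RPC methods, so a rough equivalent using scripts/rpc.py directly looks like this (paths and arguments are the ones logged):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem,
  # one namespace, one listener on the in-namespace address
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # initiator side: bdevperf started with -z so it idles on its own RPC socket,
  # 1024-deep queue, 4 KiB verify workload, 10 seconds
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # attach the remote namespace (it shows up as bdev NVMe0n1), then kick off the run
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests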
00:12:01.351 [2024-11-17 08:12:06.276599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67548 ] 00:12:01.611 [2024-11-17 08:12:06.464890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.611 [2024-11-17 08:12:06.569816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.870 [2024-11-17 08:12:06.776927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:02.130 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.130 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:02.130 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:02.130 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.130 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:02.389 NVMe0n1 00:12:02.389 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.389 08:12:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:02.389 Running I/O for 10 seconds... 00:12:04.265 8192.00 IOPS, 32.00 MiB/s [2024-11-17T08:12:10.659Z] 8713.50 IOPS, 34.04 MiB/s [2024-11-17T08:12:11.598Z] 9021.00 IOPS, 35.24 MiB/s [2024-11-17T08:12:12.554Z] 9240.50 IOPS, 36.10 MiB/s [2024-11-17T08:12:13.581Z] 9408.80 IOPS, 36.75 MiB/s [2024-11-17T08:12:14.520Z] 9513.17 IOPS, 37.16 MiB/s [2024-11-17T08:12:15.458Z] 9604.43 IOPS, 37.52 MiB/s [2024-11-17T08:12:16.396Z] 9662.25 IOPS, 37.74 MiB/s [2024-11-17T08:12:17.334Z] 9760.78 IOPS, 38.13 MiB/s [2024-11-17T08:12:17.334Z] 9829.80 IOPS, 38.40 MiB/s 00:12:12.322 Latency(us) 00:12:12.322 [2024-11-17T08:12:17.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.322 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:12.322 Verification LBA range: start 0x0 length 0x4000 00:12:12.322 NVMe0n1 : 10.08 9844.84 38.46 0.00 0.00 103576.01 19581.84 77064.02 00:12:12.322 [2024-11-17T08:12:17.334Z] =================================================================================================================== 00:12:12.322 [2024-11-17T08:12:17.334Z] Total : 9844.84 38.46 0.00 0.00 103576.01 19581.84 77064.02 00:12:12.586 { 00:12:12.586 "results": [ 00:12:12.586 { 00:12:12.586 "job": "NVMe0n1", 00:12:12.586 "core_mask": "0x1", 00:12:12.586 "workload": "verify", 00:12:12.586 "status": "finished", 00:12:12.586 "verify_range": { 00:12:12.586 "start": 0, 00:12:12.586 "length": 16384 00:12:12.586 }, 00:12:12.586 "queue_depth": 1024, 00:12:12.586 "io_size": 4096, 00:12:12.586 "runtime": 10.080611, 00:12:12.586 "iops": 9844.839762192985, 00:12:12.586 "mibps": 38.45640532106635, 00:12:12.586 "io_failed": 0, 00:12:12.586 "io_timeout": 0, 00:12:12.586 "avg_latency_us": 103576.01235867474, 00:12:12.586 "min_latency_us": 19581.840963855422, 00:12:12.586 "max_latency_us": 77064.01927710844 00:12:12.586 
} 00:12:12.586 ], 00:12:12.586 "core_count": 1 00:12:12.586 } 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67548 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67548 ']' 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67548 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67548 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.586 killing process with pid 67548 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67548' 00:12:12.586 Received shutdown signal, test time was about 10.000000 seconds 00:12:12.586 00:12:12.586 Latency(us) 00:12:12.586 [2024-11-17T08:12:17.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:12.586 [2024-11-17T08:12:17.598Z] =================================================================================================================== 00:12:12.586 [2024-11-17T08:12:17.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67548 00:12:12.586 08:12:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67548 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.525 rmmod nvme_tcp 00:12:13.525 rmmod nvme_fabrics 00:12:13.525 rmmod nvme_keyring 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67516 ']' 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67516 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67516 ']' 00:12:13.525 
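A quick sanity check on the run reported above: with the queue held at a fixed depth, average latency should follow Little's law, latency ≈ queue_depth / IOPS. Here 1024 / 9844.84 IOPS ≈ 0.104 s, which lines up with the reported avg_latency_us of 103576.01, and 9844.84 IOPS at 4 KiB per I/O works out to ≈ 38.5 MiB/s, matching the MiB/s column.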
08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67516 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67516 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:13.525 killing process with pid 67516 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67516' 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67516 00:12:13.525 08:12:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67516 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:14.903 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:15.163 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:15.163 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:15.163 08:12:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.163 08:12:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:12:15.163 00:12:15.163 real 0m15.896s 00:12:15.163 user 0m25.227s 00:12:15.163 sys 0m3.107s 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.163 ************************************ 00:12:15.163 END TEST nvmf_queue_depth 00:12:15.163 ************************************ 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:15.163 ************************************ 00:12:15.163 START TEST nvmf_target_multipath 00:12:15.163 ************************************ 00:12:15.163 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:15.423 * Looking for test storage... 
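Before the multipath test rebuilds the same environment below, nvmftestfini (traced above) unwinds everything the queue_depth bring-up created. Condensed, and with the final namespace removal implied by remove_spdk_ns rather than spelled out in the trace, the teardown is roughly:

  kill "$nvmfpid" && wait "$nvmfpid"          # stop the nvmf_tgt started for this test
  modprobe -v -r nvme-tcp                     # also pulls out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics

  # drop only the SPDK-tagged firewall rules inserted during bring-up
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # detach and delete the veth/bridge topology, then the target namespace
  for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br_end" nomaster
      ip link set "$br_end" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk            # what remove_spdk_ns amounts to here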
00:12:15.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.423 --rc genhtml_branch_coverage=1 00:12:15.423 --rc genhtml_function_coverage=1 00:12:15.423 --rc genhtml_legend=1 00:12:15.423 --rc geninfo_all_blocks=1 00:12:15.423 --rc geninfo_unexecuted_blocks=1 00:12:15.423 00:12:15.423 ' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.423 --rc genhtml_branch_coverage=1 00:12:15.423 --rc genhtml_function_coverage=1 00:12:15.423 --rc genhtml_legend=1 00:12:15.423 --rc geninfo_all_blocks=1 00:12:15.423 --rc geninfo_unexecuted_blocks=1 00:12:15.423 00:12:15.423 ' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.423 --rc genhtml_branch_coverage=1 00:12:15.423 --rc genhtml_function_coverage=1 00:12:15.423 --rc genhtml_legend=1 00:12:15.423 --rc geninfo_all_blocks=1 00:12:15.423 --rc geninfo_unexecuted_blocks=1 00:12:15.423 00:12:15.423 ' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.423 --rc genhtml_branch_coverage=1 00:12:15.423 --rc genhtml_function_coverage=1 00:12:15.423 --rc genhtml_legend=1 00:12:15.423 --rc geninfo_all_blocks=1 00:12:15.423 --rc geninfo_unexecuted_blocks=1 00:12:15.423 00:12:15.423 ' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.423 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.424 
08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.424 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:15.684 08:12:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.684 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:15.685 Cannot find device "nvmf_init_br" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:15.685 Cannot find device "nvmf_init_br2" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:15.685 Cannot find device "nvmf_tgt_br" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.685 Cannot find device "nvmf_tgt_br2" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:15.685 Cannot find device "nvmf_init_br" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:15.685 Cannot find device "nvmf_init_br2" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:15.685 Cannot find device "nvmf_tgt_br" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:15.685 Cannot find device "nvmf_tgt_br2" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:15.685 Cannot find device "nvmf_br" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:15.685 Cannot find device "nvmf_init_if" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:15.685 Cannot find device "nvmf_init_if2" 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.685 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:15.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:12:15.945 00:12:15.945 --- 10.0.0.3 ping statistics --- 00:12:15.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.945 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:15.945 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:15.945 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:12:15.945 00:12:15.945 --- 10.0.0.4 ping statistics --- 00:12:15.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.945 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:15.945 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:12:16.205 00:12:16.205 --- 10.0.0.1 ping statistics --- 00:12:16.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.205 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:16.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:12:16.205 00:12:16.205 --- 10.0.0.2 ping statistics --- 00:12:16.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.205 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.205 08:12:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67943 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67943 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67943 ']' 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
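The multipath bring-up in progress here differs from the queue_depth one mainly in core count and path count: the target gets four reactors (-m 0xF instead of 0x2) and, as the rest of the trace below shows, the subsystem is exposed on both in-namespace addresses so the kernel host ends up with two TCP paths to the same namespace. A condensed sketch of those initiator-facing steps, using the host NQN/ID that nvme gen-hostnqn produced earlier in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # same subsystem layout as before, but listening on both target-side addresses
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

  # one connect per path; -g/-G request TCP header/data digests
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8
  hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8
  nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

  # each path appears as a per-controller block node under the shared subsystem;
  # the test then polls its ANA state from sysfs and expects 'optimized'
  cat /sys/block/nvme0c0n1/ana_state
  cat /sys/block/nvme0c1n1/ana_state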
00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.205 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:16.205 [2024-11-17 08:12:21.129060] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:16.205 [2024-11-17 08:12:21.129171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.465 [2024-11-17 08:12:21.321405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.465 [2024-11-17 08:12:21.468917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.465 [2024-11-17 08:12:21.468990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.465 [2024-11-17 08:12:21.469009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.465 [2024-11-17 08:12:21.469021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.465 [2024-11-17 08:12:21.469035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.465 [2024-11-17 08:12:21.471596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.465 [2024-11-17 08:12:21.471830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.465 [2024-11-17 08:12:21.472402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.465 [2024-11-17 08:12:21.472468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.725 [2024-11-17 08:12:21.712664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.984 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.984 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:12:16.984 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.984 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.984 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:17.243 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.243 08:12:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:17.243 [2024-11-17 08:12:22.187929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.243 08:12:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:17.502 Malloc0 00:12:17.762 08:12:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:17.762 08:12:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.021 08:12:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:18.280 [2024-11-17 08:12:23.063128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:18.280 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:12:18.280 [2024-11-17 08:12:23.267031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:12:18.280 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:18.539 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:12:18.798 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.798 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.798 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.798 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.798 08:12:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:20.704 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:20.705 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:20.705 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:20.705 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68033 00:12:20.705 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:20.705 08:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:20.705 [global] 00:12:20.705 thread=1 00:12:20.705 invalidate=1 00:12:20.705 rw=randrw 00:12:20.705 time_based=1 00:12:20.705 runtime=6 00:12:20.705 ioengine=libaio 00:12:20.705 direct=1 00:12:20.705 bs=4096 00:12:20.705 iodepth=128 00:12:20.705 norandommap=0 00:12:20.705 numjobs=1 00:12:20.705 00:12:20.705 verify_dump=1 00:12:20.705 verify_backlog=512 00:12:20.705 verify_state_save=0 00:12:20.705 do_verify=1 00:12:20.705 verify=crc32c-intel 00:12:20.705 [job0] 00:12:20.705 filename=/dev/nvme0n1 00:12:20.705 Could not set queue depth (nvme0n1) 00:12:20.964 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:20.964 fio-3.35 00:12:20.964 Starting 1 thread 00:12:21.902 08:12:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:21.902 08:12:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:22.162 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:22.422 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:22.681 08:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68033 00:12:27.957 00:12:27.957 job0: (groupid=0, jobs=1): err= 0: pid=68058: Sun Nov 17 08:12:32 2024 00:12:27.957 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(247MiB/6004msec) 00:12:27.957 slat (usec): min=6, max=10892, avg=51.16, stdev=186.34 00:12:27.957 clat (usec): min=748, max=18116, avg=8236.76, stdev=1533.65 00:12:27.957 lat (usec): min=775, max=18159, avg=8287.92, stdev=1540.49 00:12:27.957 clat percentiles (usec): 00:12:27.957 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 7439], 00:12:27.957 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8356], 00:12:27.957 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[11863], 00:12:27.957 | 99.00th=[12911], 99.50th=[13435], 99.90th=[14746], 99.95th=[15139], 00:12:27.957 | 99.99th=[15401] 00:12:27.957 bw ( KiB/s): min=11808, max=27736, per=52.82%, avg=22246.73, stdev=5875.74, samples=11 00:12:27.957 iops : min= 2952, max= 6934, avg=5561.64, stdev=1468.90, samples=11 00:12:27.957 write: IOPS=6302, BW=24.6MiB/s (25.8MB/s)(132MiB/5359msec); 0 zone resets 00:12:27.957 slat (usec): min=20, max=1884, avg=65.03, stdev=115.76 00:12:27.957 clat (usec): min=511, max=14932, avg=7115.03, stdev=1329.42 00:12:27.957 lat (usec): min=561, max=14980, avg=7180.06, stdev=1332.56 00:12:27.957 clat percentiles (usec): 00:12:27.957 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6259], 00:12:27.957 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7439], 00:12:27.957 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8848], 00:12:27.957 | 99.00th=[11731], 99.50th=[12911], 99.90th=[14484], 99.95th=[14746], 00:12:27.957 | 99.99th=[14877] 00:12:27.957 bw ( KiB/s): min=12288, max=26856, per=88.24%, avg=22247.73, stdev=5392.74, samples=11 00:12:27.957 iops : min= 3072, max= 6714, avg=5561.82, stdev=1348.09, samples=11 00:12:27.957 lat (usec) : 750=0.01%, 1000=0.01% 00:12:27.957 lat (msec) : 2=0.09%, 4=0.38%, 10=92.69%, 20=6.84% 00:12:27.957 cpu : usr=7.60%, sys=31.20%, ctx=5811, majf=0, minf=102 00:12:27.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:27.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.957 issued rwts: total=63213,33776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:27.957 00:12:27.957 Run status group 0 (all jobs): 00:12:27.957 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=247MiB (259MB), run=6004-6004msec 00:12:27.957 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=132MiB (138MB), run=5359-5359msec 00:12:27.957 00:12:27.957 Disk stats (read/write): 00:12:27.957 nvme0n1: ios=62450/33044, merge=0/0, ticks=478039/212187, in_queue=690226, util=98.68% 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68134 00:12:27.957 08:12:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:27.957 [global] 00:12:27.957 thread=1 00:12:27.957 invalidate=1 00:12:27.957 rw=randrw 00:12:27.957 time_based=1 00:12:27.957 runtime=6 00:12:27.957 ioengine=libaio 00:12:27.957 direct=1 00:12:27.957 bs=4096 00:12:27.957 iodepth=128 00:12:27.957 norandommap=0 00:12:27.957 numjobs=1 00:12:27.957 00:12:27.957 verify_dump=1 00:12:27.957 verify_backlog=512 00:12:27.957 verify_state_save=0 00:12:27.957 do_verify=1 00:12:27.957 verify=crc32c-intel 00:12:27.957 [job0] 00:12:27.958 filename=/dev/nvme0n1 00:12:27.958 Could not set queue depth (nvme0n1) 00:12:27.958 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:27.958 fio-3.35 00:12:27.958 Starting 1 thread 00:12:28.526 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:28.785 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:29.045 08:12:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:29.304 08:12:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68134 00:12:34.580 00:12:34.580 job0: (groupid=0, jobs=1): err= 0: pid=68165: Sun Nov 17 08:12:38 2024 00:12:34.580 read: IOPS=10.7k, BW=41.7MiB/s (43.8MB/s)(251MiB/6002msec) 00:12:34.580 slat (usec): min=6, max=6019, avg=45.25, stdev=155.96 00:12:34.580 clat (usec): min=329, max=21850, avg=8139.85, stdev=2680.91 00:12:34.580 lat (usec): min=350, max=21866, avg=8185.10, stdev=2682.90 00:12:34.580 clat percentiles (usec): 00:12:34.580 | 1.00th=[ 1582], 5.00th=[ 3523], 10.00th=[ 5538], 20.00th=[ 7111], 00:12:34.580 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 8160], 00:12:34.580 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[11469], 95.00th=[13566], 00:12:34.580 | 99.00th=[17171], 99.50th=[18744], 99.90th=[20055], 99.95th=[20317], 00:12:34.580 | 99.99th=[21365] 00:12:34.580 bw ( KiB/s): min= 2344, max=27784, per=52.40%, avg=22397.82, stdev=8072.12, samples=11 00:12:34.580 iops : min= 586, max= 6946, avg=5599.45, stdev=2018.03, samples=11 00:12:34.580 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(134MiB/5191msec); 0 zone resets 00:12:34.580 slat (usec): min=17, max=1920, avg=58.92, stdev=101.63 00:12:34.580 clat (usec): min=448, max=19230, avg=6973.62, stdev=2397.69 00:12:34.580 lat (usec): min=482, max=19266, avg=7032.54, stdev=2399.53 00:12:34.580 clat percentiles (usec): 00:12:34.580 | 1.00th=[ 1385], 5.00th=[ 2573], 10.00th=[ 4293], 20.00th=[ 5538], 00:12:34.580 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:12:34.580 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 9110], 95.00th=[11863], 00:12:34.580 | 99.00th=[14615], 99.50th=[15270], 99.90th=[17957], 99.95th=[18482], 00:12:34.580 | 99.99th=[19006] 00:12:34.580 bw ( KiB/s): min= 2576, max=28782, per=84.95%, avg=22458.00, stdev=7990.76, samples=11 00:12:34.580 iops : min= 644, max= 7195, avg=5614.45, stdev=1997.65, samples=11 00:12:34.580 lat (usec) : 500=0.02%, 750=0.09%, 1000=0.15% 00:12:34.580 lat (msec) : 2=2.17%, 4=4.44%, 10=81.72%, 20=11.34%, 50=0.07% 00:12:34.581 cpu : usr=7.12%, sys=31.94%, ctx=6529, majf=0, minf=139 00:12:34.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:34.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.581 issued rwts: total=64132,34309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.581 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:12:34.581 00:12:34.581 Run status group 0 (all jobs): 00:12:34.581 READ: bw=41.7MiB/s (43.8MB/s), 41.7MiB/s-41.7MiB/s (43.8MB/s-43.8MB/s), io=251MiB (263MB), run=6002-6002msec 00:12:34.581 WRITE: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=134MiB (141MB), run=5191-5191msec 00:12:34.581 00:12:34.581 Disk stats (read/write): 00:12:34.581 nvme0n1: ios=63450/33560, merge=0/0, ticks=485317/213263, in_queue=698580, util=98.70% 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:12:34.581 08:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.581 rmmod nvme_tcp 00:12:34.581 rmmod nvme_fabrics 00:12:34.581 rmmod nvme_keyring 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 67943 ']' 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67943 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67943 ']' 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67943 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67943 00:12:34.581 killing process with pid 67943 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67943' 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67943 00:12:34.581 08:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67943 00:12:35.960 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.960 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.960 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.960 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:35.960 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:35.960 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:35.961 
08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:35.961 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:36.221 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.221 08:12:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:12:36.221 ************************************ 00:12:36.221 END TEST nvmf_target_multipath 00:12:36.221 ************************************ 00:12:36.221 00:12:36.221 real 0m20.929s 00:12:36.221 user 1m15.083s 00:12:36.221 sys 0m10.280s 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:36.221 ************************************ 00:12:36.221 START TEST nvmf_zcopy 00:12:36.221 ************************************ 00:12:36.221 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:36.481 * Looking for test storage... 
00:12:36.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.481 --rc genhtml_branch_coverage=1 00:12:36.481 --rc genhtml_function_coverage=1 00:12:36.481 --rc genhtml_legend=1 00:12:36.481 --rc geninfo_all_blocks=1 00:12:36.481 --rc geninfo_unexecuted_blocks=1 00:12:36.481 00:12:36.481 ' 00:12:36.481 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.481 --rc genhtml_branch_coverage=1 00:12:36.481 --rc genhtml_function_coverage=1 00:12:36.481 --rc genhtml_legend=1 00:12:36.482 --rc geninfo_all_blocks=1 00:12:36.482 --rc geninfo_unexecuted_blocks=1 00:12:36.482 00:12:36.482 ' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.482 --rc genhtml_branch_coverage=1 00:12:36.482 --rc genhtml_function_coverage=1 00:12:36.482 --rc genhtml_legend=1 00:12:36.482 --rc geninfo_all_blocks=1 00:12:36.482 --rc geninfo_unexecuted_blocks=1 00:12:36.482 00:12:36.482 ' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.482 --rc genhtml_branch_coverage=1 00:12:36.482 --rc genhtml_function_coverage=1 00:12:36.482 --rc genhtml_legend=1 00:12:36.482 --rc geninfo_all_blocks=1 00:12:36.482 --rc geninfo_unexecuted_blocks=1 00:12:36.482 00:12:36.482 ' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.482 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:36.742 Cannot find device "nvmf_init_br" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:36.742 08:12:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:36.742 Cannot find device "nvmf_init_br2" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:36.742 Cannot find device "nvmf_tgt_br" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.742 Cannot find device "nvmf_tgt_br2" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:36.742 Cannot find device "nvmf_init_br" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:36.742 Cannot find device "nvmf_init_br2" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:36.742 Cannot find device "nvmf_tgt_br" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:36.742 Cannot find device "nvmf_tgt_br2" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:36.742 Cannot find device "nvmf_br" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:36.742 Cannot find device "nvmf_init_if" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:36.742 Cannot find device "nvmf_init_if2" 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.742 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:37.002 08:12:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:37.002 08:12:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:37.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:12:37.003 00:12:37.003 --- 10.0.0.3 ping statistics --- 00:12:37.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.003 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:37.003 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:37.262 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:37.262 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:12:37.262 00:12:37.262 --- 10.0.0.4 ping statistics --- 00:12:37.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.262 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:37.262 00:12:37.262 --- 10.0.0.1 ping statistics --- 00:12:37.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.262 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:37.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:37.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:12:37.262 00:12:37.262 --- 10.0.0.2 ping statistics --- 00:12:37.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.262 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.262 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68477 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68477 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68477 ']' 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.263 08:12:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.263 [2024-11-17 08:12:42.206632] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:37.263 [2024-11-17 08:12:42.206773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.522 [2024-11-17 08:12:42.394747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.781 [2024-11-17 08:12:42.538090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.781 [2024-11-17 08:12:42.538306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.781 [2024-11-17 08:12:42.538331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.781 [2024-11-17 08:12:42.538354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.781 [2024-11-17 08:12:42.538368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.781 [2024-11-17 08:12:42.539842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.781 [2024-11-17 08:12:42.780113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:38.041 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.041 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:38.041 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.041 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.041 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.324 [2024-11-17 08:12:43.087012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.324 [2024-11-17 08:12:43.103215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.324 malloc0 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:38.324 { 00:12:38.324 "params": { 00:12:38.324 "name": "Nvme$subsystem", 00:12:38.324 "trtype": "$TEST_TRANSPORT", 00:12:38.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:38.324 "adrfam": "ipv4", 00:12:38.324 "trsvcid": "$NVMF_PORT", 00:12:38.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:38.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:38.324 "hdgst": ${hdgst:-false}, 00:12:38.324 "ddgst": ${ddgst:-false} 00:12:38.324 }, 00:12:38.324 "method": "bdev_nvme_attach_controller" 00:12:38.324 } 00:12:38.324 EOF 00:12:38.324 )") 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
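Stripped of the xtrace noise, the target-side setup captured above reduces to a handful of RPC calls. A minimal re-creation is sketched below; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions (the socket is the one the trace waits on), while the commands and arguments are taken verbatim from the trace.

  # Sketch of the target configuration performed above, as direct rpc.py calls.
  # Repo path and RPC socket are assumptions; arguments come from the trace.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The --zcopy flag on the transport is what the rest of the nvmf_zcopy run exercises; malloc0 becomes namespace 1 of cnode1, which is relevant to the repeated namespace errors further down.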
00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:38.324 08:12:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:38.324 "params": { 00:12:38.324 "name": "Nvme1", 00:12:38.324 "trtype": "tcp", 00:12:38.324 "traddr": "10.0.0.3", 00:12:38.324 "adrfam": "ipv4", 00:12:38.324 "trsvcid": "4420", 00:12:38.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:38.324 "hdgst": false, 00:12:38.324 "ddgst": false 00:12:38.324 }, 00:12:38.324 "method": "bdev_nvme_attach_controller" 00:12:38.324 }' 00:12:38.324 [2024-11-17 08:12:43.279434] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:38.324 [2024-11-17 08:12:43.279751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68510 ] 00:12:38.626 [2024-11-17 08:12:43.475602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.626 [2024-11-17 08:12:43.573214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.899 [2024-11-17 08:12:43.768902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:39.158 Running I/O for 10 seconds... 00:12:41.034 6907.00 IOPS, 53.96 MiB/s [2024-11-17T08:12:46.984Z] 6953.50 IOPS, 54.32 MiB/s [2024-11-17T08:12:48.362Z] 6997.33 IOPS, 54.67 MiB/s [2024-11-17T08:12:49.305Z] 7013.25 IOPS, 54.79 MiB/s [2024-11-17T08:12:50.250Z] 7032.80 IOPS, 54.94 MiB/s [2024-11-17T08:12:51.188Z] 7040.83 IOPS, 55.01 MiB/s [2024-11-17T08:12:52.127Z] 7030.00 IOPS, 54.92 MiB/s [2024-11-17T08:12:53.072Z] 7032.62 IOPS, 54.94 MiB/s [2024-11-17T08:12:54.008Z] 7032.67 IOPS, 54.94 MiB/s [2024-11-17T08:12:54.008Z] 7037.80 IOPS, 54.98 MiB/s 00:12:48.996 Latency(us) 00:12:48.996 [2024-11-17T08:12:54.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:48.996 Verification LBA range: start 0x0 length 0x1000 00:12:48.996 Nvme1n1 : 10.01 7039.11 54.99 0.00 0.00 18135.20 1204.13 25582.73 00:12:48.996 [2024-11-17T08:12:54.008Z] =================================================================================================================== 00:12:48.996 [2024-11-17T08:12:54.008Z] Total : 7039.11 54.99 0.00 0.00 18135.20 1204.13 25582.73 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68639 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:50.376 { 00:12:50.376 "params": { 00:12:50.376 "name": "Nvme$subsystem", 00:12:50.376 "trtype": "$TEST_TRANSPORT", 00:12:50.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.376 "adrfam": "ipv4", 00:12:50.376 "trsvcid": "$NVMF_PORT", 00:12:50.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.376 "hdgst": ${hdgst:-false}, 00:12:50.376 "ddgst": ${ddgst:-false} 00:12:50.376 }, 00:12:50.376 "method": "bdev_nvme_attach_controller" 00:12:50.376 } 00:12:50.376 EOF 00:12:50.376 )") 00:12:50.376 [2024-11-17 08:12:55.161263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.376 [2024-11-17 08:12:55.161330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:50.376 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:50.377 08:12:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:50.377 "params": { 00:12:50.377 "name": "Nvme1", 00:12:50.377 "trtype": "tcp", 00:12:50.377 "traddr": "10.0.0.3", 00:12:50.377 "adrfam": "ipv4", 00:12:50.377 "trsvcid": "4420", 00:12:50.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.377 "hdgst": false, 00:12:50.377 "ddgst": false 00:12:50.377 }, 00:12:50.377 "method": "bdev_nvme_attach_controller" 00:12:50.377 }' 00:12:50.377 [2024-11-17 08:12:55.177160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.177213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.193134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.193333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.205107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.205160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.217113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.217157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.229098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.229156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.241058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.241102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.253057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.253103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.259923] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:50.377 [2024-11-17 08:12:55.260042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68639 ] 00:12:50.377 [2024-11-17 08:12:55.265070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.265115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.277011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.277058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.293009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.293061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.308970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.309016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.320970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.321014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.336945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.336990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.352923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.352965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.368895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.368940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.377 [2024-11-17 08:12:55.384882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.377 [2024-11-17 08:12:55.384926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.400842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.400887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.416833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.416876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.428798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.428846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.440835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.440876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.443647] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:12:50.637 [2024-11-17 08:12:55.452781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.452826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.464756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.464796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.476786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.476830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.488746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.488788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.500716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.500763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.512703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.512742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.524680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.524723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.536676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.536717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.548656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.548700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.560632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.560672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.572625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.572669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.584638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.584679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.587211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.637 [2024-11-17 08:12:55.596580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.596627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.608610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.608649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 
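The long run of paired subsystem.c/nvmf_rpc.c messages here is not a crash: each pair appears to come from the harness repeatedly re-issuing nvmf_subsystem_add_ns for a namespace ID that is already occupied, both while the second bdevperf job is being brought up and once its I/O is running, so the subsystem is paused around an add that is then rejected. A single manual re-invocation would reproduce one pair (hypothetical command; the rpc.py path is an assumption, the NQN, bdev name and NSID come from the trace):

  # malloc0 is already attached to cnode1 as NSID 1 (see the add_ns call above),
  # so asking for NSID 1 again is rejected by the target:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # -> subsystem.c: Requested NSID 1 already in use
  # -> nvmf_rpc.c:  Unable to add namespace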
[2024-11-17 08:12:55.624610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.624659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.637 [2024-11-17 08:12:55.636599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.637 [2024-11-17 08:12:55.636640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.648590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.648635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.660590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.660632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.672588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.672632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.684590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.684641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.696577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.696621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.708607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.708655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.720576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.720621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.732610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.732650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.744597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.744642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.756611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.756653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.768598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.768643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.780608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.780648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.792625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 
08:12:55.792674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.804589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.804630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.816578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.816629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.828597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.828637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.840606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.840651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.848908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:50.897 [2024-11-17 08:12:55.852583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.852625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.864631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.864676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.876609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.876650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.888580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.888626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.897 [2024-11-17 08:12:55.900597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.897 [2024-11-17 08:12:55.900639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.912597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.912642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.924607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.924649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.936612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.936660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.948585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.948627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.960590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.960635] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.972595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.972637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.984576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.984622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:55.996592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:55.996642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.008605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.008649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.020628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.020672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.032674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.032717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.044594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.044642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.056609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.056652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.068617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.068665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 Running I/O for 5 seconds... 
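"Running I/O for 5 seconds..." marks the second bdevperf job of this run. Both jobs read their controller config as JSON from an anonymous file descriptor (the --json /dev/fd/62 and /dev/fd/63 arguments), which is consistent with the generated config being handed over via process substitution. A minimal reproduction under that assumption, reusing the gen_nvmf_target_json helper whose rendered output is printed earlier in the trace:

  SPDK=/home/vagrant/spdk_repo/spdk

  # First job in the trace: 10 s verify pass, queue depth 128, 8 KiB I/O
  "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
      -t 10 -q 128 -w verify -o 8192

  # Second job: 5 s 50/50 randrw pass, run while the namespace RPCs above keep firing
  "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
      -t 5 -q 128 -w randrw -M 50 -o 8192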
00:12:51.157 [2024-11-17 08:12:56.080594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.080637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.096055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.096228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.110563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.110616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.126686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.126733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.143030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.143083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.157 [2024-11-17 08:12:56.159494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.157 [2024-11-17 08:12:56.159540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.175072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.175124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.184092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.184140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.199044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.199096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.215499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.215543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.230843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.230895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.249126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.249177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.263278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.263332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.279684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.279728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.297629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 
[2024-11-17 08:12:56.297680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.312842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.313028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.416 [2024-11-17 08:12:56.331572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.416 [2024-11-17 08:12:56.331620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.417 [2024-11-17 08:12:56.345633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.417 [2024-11-17 08:12:56.345678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.417 [2024-11-17 08:12:56.361533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.417 [2024-11-17 08:12:56.361581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.417 [2024-11-17 08:12:56.377855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.417 [2024-11-17 08:12:56.377902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.417 [2024-11-17 08:12:56.394373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.417 [2024-11-17 08:12:56.394424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.417 [2024-11-17 08:12:56.411502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.417 [2024-11-17 08:12:56.411546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.417 [2024-11-17 08:12:56.427023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.427219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.445929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.446121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.459703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.459754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.476384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.476431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.492459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.492506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.508560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.508606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.526976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.527026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.542988] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.543035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.560651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.560703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.576736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.576782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.593855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.593906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.610147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.610192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.626079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.626128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.642199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.642245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.660391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.660604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.676 [2024-11-17 08:12:56.675838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.676 [2024-11-17 08:12:56.676004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.693347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.693399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.709544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.709589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.726522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.726570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.741542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.741587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.757294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.757478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.773265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.773429] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.791381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.791433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.806694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.806741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.818252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.818324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.834786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.834831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.850781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.850831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.867432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.867489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.884641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.884691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.900913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.900959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.918725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.918777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.935 [2024-11-17 08:12:56.934130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.935 [2024-11-17 08:12:56.934176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.194 [2024-11-17 08:12:56.952588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.194 [2024-11-17 08:12:56.952638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.194 [2024-11-17 08:12:56.967923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.194 [2024-11-17 08:12:56.968106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.194 [2024-11-17 08:12:56.985465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.194 [2024-11-17 08:12:56.985618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.194 [2024-11-17 08:12:57.000759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.194 [2024-11-17 08:12:57.000912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.194 [2024-11-17 08:12:57.009984] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.194 [2024-11-17 08:12:57.010172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.194 [2024-11-17 08:12:57.024944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.194 [2024-11-17 08:12:57.025097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.040787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.040947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.058891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.059049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.073972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.074146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 12896.00 IOPS, 100.75 MiB/s [2024-11-17T08:12:57.207Z] [2024-11-17 08:12:57.085669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.085832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.102131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.102290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.118799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.118956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.135289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.135473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.151711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.151864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.169365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.169572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.184918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.185071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.195 [2024-11-17 08:12:57.202790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.195 [2024-11-17 08:12:57.202956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.219510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.219681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.236028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:52.454 [2024-11-17 08:12:57.236187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.252629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.252780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.270589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.270751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.285830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.286017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.303401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.303609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.319126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.319281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.337254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.337319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.352313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.352362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.363340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.363394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.379306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.379352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.395432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.395495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.411892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.411938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.428421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.428479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.444220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.444268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.454 [2024-11-17 08:12:57.462232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.454 [2024-11-17 08:12:57.462418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.475813] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.475860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.492428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.492487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.507556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.507600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.516480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.516524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.532642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.532688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.551467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.551512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.565571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.565616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.580796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.580966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.598694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.598740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.612585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.612631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.628865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.628913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.644810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.713 [2024-11-17 08:12:57.644856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.713 [2024-11-17 08:12:57.661245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.714 [2024-11-17 08:12:57.661292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.714 [2024-11-17 08:12:57.678031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.714 [2024-11-17 08:12:57.678077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.714 [2024-11-17 08:12:57.694540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.714 [2024-11-17 08:12:57.694585] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.714 [2024-11-17 08:12:57.712142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.714 [2024-11-17 08:12:57.712329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.726794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.726960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.735925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.735974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.752743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.752789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.769773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.769941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.785184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.785351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.802368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.802415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.818806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.818853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.835428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.835486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.851378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.851582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.870001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.870058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.884178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.884226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.893618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.893663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.908634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.908679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.923328] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.923542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.940066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.940114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.957376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.957428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:52.973 [2024-11-17 08:12:57.973043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:52.973 [2024-11-17 08:12:57.973092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:57.991327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:57.991532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.005809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.005856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.017574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.017620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.033405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.033472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.049324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.049371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.067686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.067732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 12903.50 IOPS, 100.81 MiB/s [2024-11-17T08:12:58.245Z] [2024-11-17 08:12:58.081572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.081619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.098601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.098647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.114776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.114821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.132982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.133150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.147225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:53.233 [2024-11-17 08:12:58.147273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.162421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.162482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.180675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.180732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.194558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.194602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.210664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.210710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.233 [2024-11-17 08:12:58.227129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.233 [2024-11-17 08:12:58.227176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.245136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.245184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.260128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.260298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.271318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.271531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.287105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.287290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.303529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.303573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.319911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.319959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.492 [2024-11-17 08:12:58.336931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.492 [2024-11-17 08:12:58.336980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.352342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.352388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.370265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.370434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.384064] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.384113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.399590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.399635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.417576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.417621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.431776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.431824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.447256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.447303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.465338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.465551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.479566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.479611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.493 [2024-11-17 08:12:58.495101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.493 [2024-11-17 08:12:58.495148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.513526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.513570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.528436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.528504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.545380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.545429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.560406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.560615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.572325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.572536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.589121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.589168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.606185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.606232] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.622216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.622263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.639035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.639081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.654428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.654489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.665459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.665505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.681298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.681345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.698310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.698356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.714671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.714717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.732591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.732635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.745610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.745655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.752 [2024-11-17 08:12:58.761289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:53.752 [2024-11-17 08:12:58.761337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.777480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.777521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.794558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.794603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.811817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.812014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.827148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.827334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.838154] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.838320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.854322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.854369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.870319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.870367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.887434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.887494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.904762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.904957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.920528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.920573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.937493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.937539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.953898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.953944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.971533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.971579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:58.986870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:58.987053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:59.005456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:59.005501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.012 [2024-11-17 08:12:59.019764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.012 [2024-11-17 08:12:59.019812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.035928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.035974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.051606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.051653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.070358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.070424] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 12940.67 IOPS, 101.10 MiB/s [2024-11-17T08:12:59.283Z] [2024-11-17 08:12:59.084438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.084495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.100573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.100617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.118132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.118315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.133006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.133174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.144302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.144477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.160379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.160573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.176794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.176841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.193779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.193826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.210669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.210715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.226819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.226865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.245726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.245771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.259599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.259645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.271 [2024-11-17 08:12:59.270901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.271 [2024-11-17 08:12:59.271069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.286944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.286991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 
08:12:59.302379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.302584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.320525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.320570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.335731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.335777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.347157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.347205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.363080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.363128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.377856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.378033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.394084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.394131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.409381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.409599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.418465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.418642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.433957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.434162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.451215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.451392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.465403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.465595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.481495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.481651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.498208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.498382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.514683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.514839] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.531 [2024-11-17 08:12:59.530888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.531 [2024-11-17 08:12:59.531064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.547366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.547552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.564649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.564802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.580949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.581103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.599152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.599307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.613554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.613741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.622990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.623143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.638437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.638603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.649766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.649920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.665575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.665727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.681985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.682166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.698134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.698291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.717054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.717209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.731317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.731519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.742425] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.742594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.758339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.758534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.774833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.775002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.790 [2024-11-17 08:12:59.791523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.790 [2024-11-17 08:12:59.791691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.807123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.807286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.825007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.825174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.839374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.839578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.854988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.855164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.872393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.872587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.888751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.888922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.904991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.905164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.921372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.921555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.939262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.939310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.954399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.954465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.965627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.965674] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.980079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.980245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:12:59.991014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:12:59.991180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:13:00.007881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:13:00.007928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:13:00.022324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:13:00.022516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:13:00.038934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:13:00.038984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.050 [2024-11-17 08:13:00.054826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.050 [2024-11-17 08:13:00.054874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.310 [2024-11-17 08:13:00.065923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.310 [2024-11-17 08:13:00.066118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.310 12915.00 IOPS, 100.90 MiB/s [2024-11-17T08:13:00.322Z] [2024-11-17 08:13:00.082725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.310 [2024-11-17 08:13:00.082774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.310 [2024-11-17 08:13:00.098621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.310 [2024-11-17 08:13:00.098666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.310 [2024-11-17 08:13:00.116594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.310 [2024-11-17 08:13:00.116639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.310 [2024-11-17 08:13:00.131851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.310 [2024-11-17 08:13:00.132035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.310 [2024-11-17 08:13:00.150284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.310 [2024-11-17 08:13:00.150333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.165482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.165526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.183949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.183998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 
08:13:00.197926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.197974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.213364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.213412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.230862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.231034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.246501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.246544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.265202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.265248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.279385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.279434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.294920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.294969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.311 [2024-11-17 08:13:00.312153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.311 [2024-11-17 08:13:00.312202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.328507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.328545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.345913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.346106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.362677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.362724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.378588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.378634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.394835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.394884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.412044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.412090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.427475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.427519] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.445684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.445729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.460631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.460676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.472251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.472418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.488006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.488173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.504738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.504785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.520715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.520761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.538940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.539134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.553267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.553315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.570 [2024-11-17 08:13:00.570023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.570 [2024-11-17 08:13:00.570224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.586524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.586569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.602993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.603041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.621166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.621363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.635358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.635406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.651285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.651333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.667601] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.667648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.684361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.684408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.700786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.700977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.717125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.717173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.734583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.734629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.750167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.750213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.767324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.767371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.783648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.783692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.801319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.801366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.816532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.816577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.830 [2024-11-17 08:13:00.834903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.830 [2024-11-17 08:13:00.835073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.090 [2024-11-17 08:13:00.848713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.090 [2024-11-17 08:13:00.848762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.090 [2024-11-17 08:13:00.863976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.090 [2024-11-17 08:13:00.864023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.090 [2024-11-17 08:13:00.880761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.090 [2024-11-17 08:13:00.880807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.090 [2024-11-17 08:13:00.898364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.090 [2024-11-17 08:13:00.898566] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:00.912144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:00.912192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:00.927620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:00.927665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:00.946039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:00.946222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:00.960258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:00.960308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:00.975556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:00.975600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:00.993778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:00.993826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:01.008011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.008059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:01.024016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.024063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:01.039216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.039266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:01.050123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.050168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 [2024-11-17 08:13:01.065674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.065721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 12927.00 IOPS, 100.99 MiB/s [2024-11-17T08:13:01.102Z] [2024-11-17 08:13:01.081347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.081546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.090 
00:12:56.090 Latency(us)
00:12:56.090 [2024-11-17T08:13:01.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:56.090 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:56.090 Nvme1n1 : 5.01 12926.30 100.99 0.00 0.00 9890.60 3553.16 16739.32
00:12:56.090 [2024-11-17T08:13:01.102Z] ===================================================================================================================
00:12:56.090 [2024-11-17T08:13:01.102Z] Total : 12926.30 100.99 0.00 0.00 9890.60 3553.16 16739.32
00:12:56.090 [2024-11-17 08:13:01.089814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.090 [2024-11-17 08:13:01.089974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.101832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.101986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.113831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.113976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.125791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.125935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.137795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.137935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.153746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.153886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.165750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.165895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.177730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.177889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.189690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.189833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.201705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.201849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.213681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.213842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.225644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.225785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.237647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.237804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.249613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:56.350 [2024-11-17 08:13:01.249756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:56.350 [2024-11-17 08:13:01.261652] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.261813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.273594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.273637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.285575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.285617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.297589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.297632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.309588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.309630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.321577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.321620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.333602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.333644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.345577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.345618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.350 [2024-11-17 08:13:01.357586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.350 [2024-11-17 08:13:01.357633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.369584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.610 [2024-11-17 08:13:01.369625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.381570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.610 [2024-11-17 08:13:01.381612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.393599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.610 [2024-11-17 08:13:01.393640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.405622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.610 [2024-11-17 08:13:01.405664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.417582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.610 [2024-11-17 08:13:01.417623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.429598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.610 [2024-11-17 08:13:01.429638] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.610 [2024-11-17 08:13:01.441575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.441616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.453585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.453627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.465591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.465634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.477583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.477624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.489603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.489644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.501585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.501626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.517571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.517611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.529592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.529634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.541571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.541611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.553600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.553641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.565587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.565628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.577567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.577608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.589582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.589623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.601589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.601630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.611 [2024-11-17 08:13:01.613582] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.611 [2024-11-17 08:13:01.613623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.625585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.625626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.637567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.637608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.649593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.649633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.661588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.661629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.673573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.673613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.685582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.685624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.701612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.701652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.713568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.713608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.725610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.725650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.737572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.737611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.749594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.749635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.761589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.761630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.773567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.773606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.785581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.785622] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.797582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.797622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.809567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.809608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.821609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.821650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.833568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.833607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.845600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.845640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.857579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.857619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:56.871 [2024-11-17 08:13:01.869568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:56.871 [2024-11-17 08:13:01.869608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.881583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.881631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.893589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.893629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.905566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.905608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.917602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.917641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.929603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.929644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.941589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.941630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.953582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.953623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.965568] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.965609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.977589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.977629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:01.989601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:01.989642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:02.001567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:02.001610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:02.013581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:02.013622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:02.025579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:02.025620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.131 [2024-11-17 08:13:02.037664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.131 [2024-11-17 08:13:02.037705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.132 [2024-11-17 08:13:02.049587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:57.132 [2024-11-17 08:13:02.049628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:57.132 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68639) - No such process 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68639 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.132 delay0 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
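For readability, the sequence the zcopy script runs at this point can be sketched as follows. It is reconstructed from the zcopy.sh@52 through @56 xtrace markers in this log: rpc_cmd is the harness helper that issues the JSON-RPCs, and the delay parameters and abort arguments are copied verbatim from the trace, so treat this as a sketch of the traced flow rather than the script itself.

# Reconstructed zcopy tail: swap the contested namespace for a deliberately slow one, then abort queued I/O.
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # free NSID 1 on cnode1
rpc_cmd bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # wrap malloc0 with a large artificial latency
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # expose the slow bdev as NSID 1
# queue I/O against the slow namespace over TCP and abort it while it is still pending
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'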
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.132 08:13:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:12:57.391 [2024-11-17 08:13:02.348770] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:03.958 Initializing NVMe Controllers 00:13:03.958 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.958 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:03.958 Initialization complete. Launching workers. 00:13:03.958 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 112 00:13:03.958 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 399, failed to submit 33 00:13:03.958 success 282, unsuccessful 117, failed 0 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:03.958 rmmod nvme_tcp 00:13:03.958 rmmod nvme_fabrics 00:13:03.958 rmmod nvme_keyring 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68477 ']' 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68477 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68477 ']' 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68477 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68477 00:13:03.958 killing process with pid 68477 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68477' 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 
68477 00:13:03.958 08:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68477 00:13:04.896 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.896 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.896 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.896 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:04.896 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:04.896 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:04.897 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:05.156 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:05.156 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:05.156 08:13:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:13:05.156 00:13:05.156 real 0m28.940s 00:13:05.156 user 0m44.712s 00:13:05.156 sys 0m10.322s 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.156 ************************************ 00:13:05.156 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
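The nvmftestfini teardown logged above reduces to the following sketch. Interface, bridge, and namespace names are copied from the trace; the loop is only an approximation of the individual commands that nvmf_veth_fini runs in nvmf/common.sh.

# Approximate teardown flow, per the commands traced by nvmftestfini / nvmf_veth_fini
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics                              # nvme_tcp, nvme_fabrics and nvme_keyring get unloaded
killprocess "$nvmfpid"                                   # stop nvmf_tgt (pid 68477 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore     # keep everything except the SPDK-tagged rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster                          # detach each bridge leg
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge                       # drop the test bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself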
common/autotest_common.sh@10 -- # set +x 00:13:05.156 END TEST nvmf_zcopy 00:13:05.156 ************************************ 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:05.416 ************************************ 00:13:05.416 START TEST nvmf_nmic 00:13:05.416 ************************************ 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:05.416 * Looking for test storage... 00:13:05.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:05.416 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:05.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.677 --rc genhtml_branch_coverage=1 00:13:05.677 --rc genhtml_function_coverage=1 00:13:05.677 --rc genhtml_legend=1 00:13:05.677 --rc geninfo_all_blocks=1 00:13:05.677 --rc geninfo_unexecuted_blocks=1 00:13:05.677 00:13:05.677 ' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:05.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.677 --rc genhtml_branch_coverage=1 00:13:05.677 --rc genhtml_function_coverage=1 00:13:05.677 --rc genhtml_legend=1 00:13:05.677 --rc geninfo_all_blocks=1 00:13:05.677 --rc geninfo_unexecuted_blocks=1 00:13:05.677 00:13:05.677 ' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:05.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.677 --rc genhtml_branch_coverage=1 00:13:05.677 --rc genhtml_function_coverage=1 00:13:05.677 --rc genhtml_legend=1 00:13:05.677 --rc geninfo_all_blocks=1 00:13:05.677 --rc geninfo_unexecuted_blocks=1 00:13:05.677 00:13:05.677 ' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:05.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.677 --rc genhtml_branch_coverage=1 00:13:05.677 --rc genhtml_function_coverage=1 00:13:05.677 --rc genhtml_legend=1 00:13:05.677 --rc geninfo_all_blocks=1 00:13:05.677 --rc geninfo_unexecuted_blocks=1 00:13:05.677 00:13:05.677 ' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.677 08:13:10 
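The xtrace above is the lcov version check from scripts/common.sh (lt 1.15 2 via cmp_versions). A minimal sketch of what that helper does, simplified to the strict less-than case exercised here and not the exact upstream code:

# lt VER1 VER2: succeed when VER1 sorts strictly before VER2, splitting on '.', '-' and ':' like the traced helper
lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                         # equal versions are not strictly less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"           # succeeds here, which is why the lcov_branch_coverage form of the --rc options is used below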
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:05.677 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:05.677 08:13:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:05.677 Cannot 
find device "nvmf_init_br" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:05.677 Cannot find device "nvmf_init_br2" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:05.677 Cannot find device "nvmf_tgt_br" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:05.677 Cannot find device "nvmf_tgt_br2" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:05.677 Cannot find device "nvmf_init_br" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:05.677 Cannot find device "nvmf_init_br2" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:05.677 Cannot find device "nvmf_tgt_br" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:05.677 Cannot find device "nvmf_tgt_br2" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:05.677 Cannot find device "nvmf_br" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:05.677 Cannot find device "nvmf_init_if" 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:13:05.677 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:05.935 Cannot find device "nvmf_init_if2" 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:05.935 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:05.936 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:06.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:06.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:13:06.195 00:13:06.195 --- 10.0.0.3 ping statistics --- 00:13:06.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.195 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:06.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:06.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:13:06.195 00:13:06.195 --- 10.0.0.4 ping statistics --- 00:13:06.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.195 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:06.195 08:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:06.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:13:06.195 00:13:06.195 --- 10.0.0.1 ping statistics --- 00:13:06.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.195 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:06.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:06.195 00:13:06.195 --- 10.0.0.2 ping statistics --- 00:13:06.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.195 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69046 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69046 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69046 ']' 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.195 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:06.195 [2024-11-17 08:13:11.173538] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
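The ip and iptables calls above come from nvmf_veth_init. As a reading aid, the topology they build can be condensed into the sketch below; every name and address is taken from the trace, and only the first veth pair on each side is spelled out (nvmf_init_if2 and nvmf_tgt_if2 are set up the same way).

# initiator side (default netns)              target side (netns nvmf_tgt_ns_spdk)
#   nvmf_init_if  10.0.0.1/24 --+         +-- nvmf_tgt_if   10.0.0.3/24
#   nvmf_init_if2 10.0.0.2/24 --+ nvmf_br +-- nvmf_tgt_if2  10.0.0.4/24
#   (each *_if is one end of a veth pair; the *_br peer ends are enslaved to the nvmf_br bridge)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator pair stays in the default netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                         # bridge the two sides together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the listener port
ping -c 1 10.0.0.3                                                  # reachability check before any test runs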
00:13:06.195 [2024-11-17 08:13:11.174338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.456 [2024-11-17 08:13:11.362428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.715 [2024-11-17 08:13:11.506495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.715 [2024-11-17 08:13:11.506547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.715 [2024-11-17 08:13:11.506564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.715 [2024-11-17 08:13:11.506576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.715 [2024-11-17 08:13:11.506589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.715 [2024-11-17 08:13:11.509510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.715 [2024-11-17 08:13:11.509706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.715 [2024-11-17 08:13:11.509921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.715 [2024-11-17 08:13:11.510104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.975 [2024-11-17 08:13:11.772787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.235 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.235 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:07.235 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.235 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.235 08:13:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.235 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 [2024-11-17 08:13:12.046806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 Malloc0 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.236 08:13:12 
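For reference, the target bring-up that this nmic test performs, together with the multipath connect it does a little further below, reduces to the sketch that follows. Every command and argument is copied from the xtrace; rpc_cmd, waitforlisten and waitforserial are the harness helpers seen throughout this log, and the host NQN/ID are the values generated earlier in the run, so this is a condensed reading of the trace rather than the test script itself.

# nvmfappstart: run nvmf_tgt inside the target namespace and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
waitforlisten "$nvmfpid"
# TCP transport plus a subsystem backed by a 64 MB malloc bdev with 512-byte blocks
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# test case2: add a second listener and connect to the same subsystem over both paths
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
             --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
             --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
# waitforserial: poll until a block device carrying the target's serial shows up
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME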
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 [2024-11-17 08:13:12.204211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:07.236 test case1: single bdev can't be used in multiple subsystems 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.236 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.236 [2024-11-17 08:13:12.239983] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:07.236 [2024-11-17 08:13:12.240200] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:07.236 [2024-11-17 08:13:12.240317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.495 request: 00:13:07.495 { 00:13:07.495 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:07.495 "namespace": { 00:13:07.495 "bdev_name": "Malloc0", 00:13:07.495 "no_auto_visible": false 00:13:07.495 }, 00:13:07.495 "method": "nvmf_subsystem_add_ns", 00:13:07.495 "req_id": 1 00:13:07.495 } 00:13:07.495 Got JSON-RPC error response 00:13:07.495 response: 00:13:07.495 { 00:13:07.495 "code": -32602, 00:13:07.495 "message": "Invalid parameters" 00:13:07.495 } 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:07.495 Adding namespace failed - expected result. 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:07.495 test case2: host connect to nvmf target in multiple paths 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:07.495 [2024-11-17 08:13:12.260110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.495 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:07.496 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:13:07.755 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.755 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:07.755 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.755 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:07.755 08:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:13:09.661 08:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:09.661 08:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:09.661 08:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.661 08:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:09.661 08:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.661 08:13:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:09.661 08:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:09.661 [global] 00:13:09.661 thread=1 00:13:09.661 invalidate=1 00:13:09.661 rw=write 00:13:09.661 time_based=1 00:13:09.661 runtime=1 00:13:09.661 ioengine=libaio 00:13:09.661 direct=1 00:13:09.661 bs=4096 00:13:09.661 iodepth=1 00:13:09.661 norandommap=0 00:13:09.661 numjobs=1 00:13:09.661 00:13:09.661 verify_dump=1 00:13:09.661 verify_backlog=512 00:13:09.661 verify_state_save=0 00:13:09.661 do_verify=1 00:13:09.661 verify=crc32c-intel 00:13:09.661 [job0] 00:13:09.661 filename=/dev/nvme0n1 00:13:09.661 Could not set queue depth (nvme0n1) 00:13:09.921 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:09.921 fio-3.35 00:13:09.921 Starting 1 thread 00:13:11.301 00:13:11.301 job0: (groupid=0, jobs=1): err= 0: pid=69137: Sun Nov 17 08:13:15 2024 00:13:11.301 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:13:11.301 slat (nsec): min=8364, max=25890, avg=9350.40, stdev=1337.90 00:13:11.301 clat (usec): min=132, max=571, avg=211.58, stdev=40.21 00:13:11.301 lat (usec): min=142, max=580, avg=220.93, stdev=40.27 00:13:11.301 clat percentiles (usec): 00:13:11.301 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 176], 00:13:11.301 | 30.00th=[ 188], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 219], 00:13:11.301 | 70.00th=[ 231], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 281], 00:13:11.301 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 371], 00:13:11.301 | 99.99th=[ 570] 00:13:11.301 write: IOPS=2937, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:13:11.301 slat (usec): min=12, max=106, avg=15.47, stdev= 5.31 00:13:11.301 clat (usec): min=79, max=284, avg=130.50, stdev=26.73 00:13:11.301 lat (usec): min=93, max=391, avg=145.96, stdev=27.67 00:13:11.301 clat percentiles (usec): 00:13:11.301 | 1.00th=[ 85], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 106], 00:13:11.301 | 30.00th=[ 113], 40.00th=[ 121], 50.00th=[ 129], 60.00th=[ 137], 00:13:11.301 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 178], 00:13:11.301 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 243], 99.95th=[ 277], 00:13:11.301 | 99.99th=[ 285] 00:13:11.301 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:13:11.301 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:11.301 lat (usec) : 100=6.22%, 250=85.89%, 500=7.87%, 750=0.02% 00:13:11.301 cpu : usr=0.80%, sys=5.70%, ctx=5500, majf=0, minf=5 00:13:11.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.301 issued rwts: total=2560,2940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.301 00:13:11.301 Run status group 0 (all jobs): 00:13:11.301 READ: bw=10.0MiB/s (10.5MB/s), 10.0MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1000-1000msec 00:13:11.301 WRITE: bw=11.5MiB/s (12.0MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.0MB/s), io=11.5MiB (12.0MB), run=1001-1001msec 00:13:11.301 00:13:11.301 Disk stats (read/write): 00:13:11.301 nvme0n1: ios=2403/2560, merge=0/0, ticks=536/358, 
in_queue=894, util=91.37% 00:13:11.301 08:13:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.301 rmmod nvme_tcp 00:13:11.301 rmmod nvme_fabrics 00:13:11.301 rmmod nvme_keyring 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69046 ']' 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69046 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69046 ']' 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69046 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69046 00:13:11.301 killing process with pid 69046 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69046' 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 69046 00:13:11.301 08:13:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69046 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:12.782 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.041 08:13:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.041 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:13:13.041 00:13:13.041 real 0m7.837s 00:13:13.041 user 0m22.772s 00:13:13.041 sys 0m2.636s 00:13:13.041 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.041 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.041 ************************************ 
00:13:13.041 END TEST nvmf_nmic 00:13:13.041 ************************************ 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.301 ************************************ 00:13:13.301 START TEST nvmf_fio_target 00:13:13.301 ************************************ 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:13.301 * Looking for test storage... 00:13:13.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:13.301 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:13.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.562 --rc genhtml_branch_coverage=1 00:13:13.562 --rc genhtml_function_coverage=1 00:13:13.562 --rc genhtml_legend=1 00:13:13.562 --rc geninfo_all_blocks=1 00:13:13.562 --rc geninfo_unexecuted_blocks=1 00:13:13.562 00:13:13.562 ' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:13.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.562 --rc genhtml_branch_coverage=1 00:13:13.562 --rc genhtml_function_coverage=1 00:13:13.562 --rc genhtml_legend=1 00:13:13.562 --rc geninfo_all_blocks=1 00:13:13.562 --rc geninfo_unexecuted_blocks=1 00:13:13.562 00:13:13.562 ' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:13.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.562 --rc genhtml_branch_coverage=1 00:13:13.562 --rc genhtml_function_coverage=1 00:13:13.562 --rc genhtml_legend=1 00:13:13.562 --rc geninfo_all_blocks=1 00:13:13.562 --rc geninfo_unexecuted_blocks=1 00:13:13.562 00:13:13.562 ' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:13.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.562 --rc genhtml_branch_coverage=1 00:13:13.562 --rc genhtml_function_coverage=1 00:13:13.562 --rc genhtml_legend=1 00:13:13.562 --rc geninfo_all_blocks=1 00:13:13.562 --rc geninfo_unexecuted_blocks=1 00:13:13.562 00:13:13.562 ' 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:13.562 
08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.562 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.563 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.563 08:13:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:13.563 Cannot find device "nvmf_init_br" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:13.563 Cannot find device "nvmf_init_br2" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:13.563 Cannot find device "nvmf_tgt_br" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.563 Cannot find device "nvmf_tgt_br2" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:13.563 Cannot find device "nvmf_init_br" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:13.563 Cannot find device "nvmf_init_br2" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:13.563 Cannot find device "nvmf_tgt_br" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:13.563 Cannot find device "nvmf_tgt_br2" 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:13:13.563 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:13.563 Cannot find device "nvmf_br" 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:13.824 Cannot find device "nvmf_init_if" 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:13.824 Cannot find device "nvmf_init_if2" 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:13:13.824 
08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.824 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:14.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:14.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:14.085 00:13:14.085 --- 10.0.0.3 ping statistics --- 00:13:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.085 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:14.085 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:14.085 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:13:14.085 00:13:14.085 --- 10.0.0.4 ping statistics --- 00:13:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.085 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:14.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:13:14.085 00:13:14.085 --- 10.0.0.1 ping statistics --- 00:13:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.085 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:14.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:14.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:14.085 00:13:14.085 --- 10.0.0.2 ping statistics --- 00:13:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.085 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:14.085 08:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69389 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69389 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69389 ']' 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.085 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.344 [2024-11-17 08:13:19.131525] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:14.344 [2024-11-17 08:13:19.131641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.344 [2024-11-17 08:13:19.318879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.604 [2024-11-17 08:13:19.459635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.604 [2024-11-17 08:13:19.459695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.604 [2024-11-17 08:13:19.459720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.604 [2024-11-17 08:13:19.459737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.604 [2024-11-17 08:13:19.459757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.604 [2024-11-17 08:13:19.462519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.604 [2024-11-17 08:13:19.462673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.604 [2024-11-17 08:13:19.462890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.604 [2024-11-17 08:13:19.463253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.863 [2024-11-17 08:13:19.715411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.123 08:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.382 [2024-11-17 08:13:20.223604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.382 08:13:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.641 08:13:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:15.641 08:13:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.901 08:13:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:15.901 08:13:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:16.161 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:16.161 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:16.731 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:16.731 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:16.731 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:16.990 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:16.990 08:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.249 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:17.249 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.508 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:17.508 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:17.768 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:18.027 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:18.027 08:13:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:18.286 08:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:18.286 08:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.286 08:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:18.545 [2024-11-17 08:13:23.482973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:18.545 08:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:18.805 08:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:19.064 08:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:19.064 08:13:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:19.064 08:13:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:19.064 08:13:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.064 08:13:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:19.064 08:13:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:19.064 08:13:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:21.601 08:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:21.601 [global] 00:13:21.601 thread=1 00:13:21.601 invalidate=1 00:13:21.601 rw=write 00:13:21.601 time_based=1 00:13:21.601 runtime=1 00:13:21.601 ioengine=libaio 00:13:21.601 direct=1 00:13:21.601 bs=4096 00:13:21.601 iodepth=1 00:13:21.601 norandommap=0 00:13:21.601 numjobs=1 00:13:21.601 00:13:21.601 verify_dump=1 00:13:21.601 verify_backlog=512 00:13:21.601 verify_state_save=0 00:13:21.601 do_verify=1 00:13:21.601 verify=crc32c-intel 00:13:21.601 [job0] 00:13:21.601 filename=/dev/nvme0n1 00:13:21.601 [job1] 00:13:21.601 filename=/dev/nvme0n2 00:13:21.601 [job2] 00:13:21.601 filename=/dev/nvme0n3 00:13:21.601 [job3] 00:13:21.601 filename=/dev/nvme0n4 00:13:21.601 Could not set queue depth (nvme0n1) 00:13:21.601 Could not set queue depth (nvme0n2) 00:13:21.601 Could not set queue depth (nvme0n3) 00:13:21.601 Could not set queue depth (nvme0n4) 00:13:21.601 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.601 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.601 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.601 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.601 fio-3.35 00:13:21.601 Starting 4 threads 00:13:22.539 00:13:22.539 job0: (groupid=0, jobs=1): err= 0: pid=69573: Sun Nov 17 08:13:27 2024 00:13:22.539 read: IOPS=1034, BW=4140KiB/s (4239kB/s)(4144KiB/1001msec) 00:13:22.539 slat (nsec): min=16556, max=79954, avg=28213.73, stdev=5726.90 00:13:22.539 clat (usec): min=208, max=4501, avg=392.80, stdev=153.72 00:13:22.539 lat (usec): min=226, max=4527, avg=421.02, stdev=154.26 00:13:22.539 clat percentiles (usec): 00:13:22.539 | 1.00th=[ 245], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 302], 00:13:22.539 | 30.00th=[ 334], 40.00th=[ 392], 50.00th=[ 412], 60.00th=[ 429], 00:13:22.539 | 70.00th=[ 441], 80.00th=[ 453], 90.00th=[ 469], 95.00th=[ 482], 00:13:22.539 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 1500], 99.95th=[ 4490], 00:13:22.539 | 99.99th=[ 
4490] 00:13:22.539 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:22.539 slat (nsec): min=16894, max=97710, avg=43407.08, stdev=7748.70 00:13:22.539 clat (usec): min=146, max=418, avg=318.43, stdev=51.12 00:13:22.539 lat (usec): min=167, max=499, avg=361.84, stdev=52.88 00:13:22.539 clat percentiles (usec): 00:13:22.539 | 1.00th=[ 172], 5.00th=[ 204], 10.00th=[ 237], 20.00th=[ 289], 00:13:22.539 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:13:22.539 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:13:22.539 | 99.00th=[ 404], 99.50th=[ 408], 99.90th=[ 420], 99.95th=[ 420], 00:13:22.539 | 99.99th=[ 420] 00:13:22.539 bw ( KiB/s): min= 5800, max= 5800, per=22.98%, avg=5800.00, stdev= 0.00, samples=1 00:13:22.539 iops : min= 1450, max= 1450, avg=1450.00, stdev= 0.00, samples=1 00:13:22.539 lat (usec) : 250=8.01%, 500=91.37%, 750=0.51% 00:13:22.539 lat (msec) : 2=0.08%, 10=0.04% 00:13:22.539 cpu : usr=2.10%, sys=7.60%, ctx=2593, majf=0, minf=19 00:13:22.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.539 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.539 job1: (groupid=0, jobs=1): err= 0: pid=69574: Sun Nov 17 08:13:27 2024 00:13:22.539 read: IOPS=1033, BW=4136KiB/s (4235kB/s)(4140KiB/1001msec) 00:13:22.539 slat (nsec): min=20485, max=73464, avg=28904.29, stdev=4830.08 00:13:22.539 clat (usec): min=219, max=1467, avg=391.36, stdev=84.35 00:13:22.539 lat (usec): min=248, max=1496, avg=420.26, stdev=84.86 00:13:22.539 clat percentiles (usec): 00:13:22.539 | 1.00th=[ 239], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 310], 00:13:22.539 | 30.00th=[ 338], 40.00th=[ 388], 50.00th=[ 412], 60.00th=[ 429], 00:13:22.539 | 70.00th=[ 441], 80.00th=[ 453], 90.00th=[ 469], 95.00th=[ 482], 00:13:22.539 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 1385], 99.95th=[ 1467], 00:13:22.539 | 99.99th=[ 1467] 00:13:22.539 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:22.539 slat (usec): min=30, max=130, avg=43.63, stdev= 6.51 00:13:22.539 clat (usec): min=146, max=418, avg=318.94, stdev=50.60 00:13:22.539 lat (usec): min=178, max=524, avg=362.57, stdev=51.51 00:13:22.539 clat percentiles (usec): 00:13:22.539 | 1.00th=[ 176], 5.00th=[ 206], 10.00th=[ 239], 20.00th=[ 289], 00:13:22.539 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:13:22.539 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:13:22.539 | 99.00th=[ 404], 99.50th=[ 412], 99.90th=[ 416], 99.95th=[ 420], 00:13:22.539 | 99.99th=[ 420] 00:13:22.539 bw ( KiB/s): min= 5912, max= 5912, per=23.43%, avg=5912.00, stdev= 0.00, samples=1 00:13:22.539 iops : min= 1478, max= 1478, avg=1478.00, stdev= 0.00, samples=1 00:13:22.539 lat (usec) : 250=7.86%, 500=91.56%, 750=0.51% 00:13:22.539 lat (msec) : 2=0.08% 00:13:22.539 cpu : usr=2.10%, sys=7.80%, ctx=2572, majf=0, minf=7 00:13:22.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.540 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.540 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:13:22.540 job2: (groupid=0, jobs=1): err= 0: pid=69575: Sun Nov 17 08:13:27 2024 00:13:22.540 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:22.540 slat (nsec): min=13658, max=53504, avg=20701.74, stdev=4769.51 00:13:22.540 clat (usec): min=233, max=516, avg=312.83, stdev=37.51 00:13:22.540 lat (usec): min=247, max=546, avg=333.53, stdev=38.61 00:13:22.540 clat percentiles (usec): 00:13:22.540 | 1.00th=[ 251], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 285], 00:13:22.540 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:13:22.540 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 388], 00:13:22.540 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 510], 99.95th=[ 519], 00:13:22.540 | 99.99th=[ 519] 00:13:22.540 write: IOPS=1566, BW=6266KiB/s (6416kB/s)(6272KiB/1001msec); 0 zone resets 00:13:22.540 slat (usec): min=25, max=498, avg=41.09, stdev=17.38 00:13:22.540 clat (usec): min=121, max=5986, avg=264.51, stdev=158.62 00:13:22.540 lat (usec): min=154, max=6027, avg=305.60, stdev=160.14 00:13:22.540 clat percentiles (usec): 00:13:22.540 | 1.00th=[ 159], 5.00th=[ 202], 10.00th=[ 217], 20.00th=[ 233], 00:13:22.540 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:13:22.540 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 334], 00:13:22.540 | 99.00th=[ 429], 99.50th=[ 490], 99.90th=[ 1909], 99.95th=[ 5997], 00:13:22.540 | 99.99th=[ 5997] 00:13:22.540 bw ( KiB/s): min= 8192, max= 8192, per=32.46%, avg=8192.00, stdev= 0.00, samples=1 00:13:22.540 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:22.540 lat (usec) : 250=23.81%, 500=75.90%, 750=0.19% 00:13:22.540 lat (msec) : 2=0.06%, 10=0.03% 00:13:22.540 cpu : usr=1.90%, sys=8.00%, ctx=3104, majf=0, minf=9 00:13:22.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.540 issued rwts: total=1536,1568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.540 job3: (groupid=0, jobs=1): err= 0: pid=69576: Sun Nov 17 08:13:27 2024 00:13:22.540 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:22.540 slat (nsec): min=16212, max=70813, avg=24070.44, stdev=5088.46 00:13:22.540 clat (usec): min=216, max=797, avg=305.99, stdev=37.65 00:13:22.540 lat (usec): min=237, max=823, avg=330.06, stdev=38.06 00:13:22.540 clat percentiles (usec): 00:13:22.540 | 1.00th=[ 243], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 281], 00:13:22.540 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 310], 00:13:22.540 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 00:13:22.540 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 725], 99.95th=[ 799], 00:13:22.540 | 99.99th=[ 799] 00:13:22.540 write: IOPS=1673, BW=6693KiB/s (6854kB/s)(6700KiB/1001msec); 0 zone resets 00:13:22.540 slat (usec): min=15, max=202, avg=39.81, stdev= 8.76 00:13:22.540 clat (usec): min=85, max=5642, avg=248.91, stdev=139.74 00:13:22.540 lat (usec): min=110, max=5682, avg=288.72, stdev=140.58 00:13:22.540 clat percentiles (usec): 00:13:22.540 | 1.00th=[ 133], 5.00th=[ 161], 10.00th=[ 190], 20.00th=[ 217], 00:13:22.540 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:13:22.540 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 314], 00:13:22.540 | 
99.00th=[ 363], 99.50th=[ 429], 99.90th=[ 586], 99.95th=[ 5669], 00:13:22.540 | 99.99th=[ 5669] 00:13:22.540 bw ( KiB/s): min= 8192, max= 8192, per=32.46%, avg=8192.00, stdev= 0.00, samples=1 00:13:22.540 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:22.540 lat (usec) : 100=0.09%, 250=28.81%, 500=70.91%, 750=0.12%, 1000=0.03% 00:13:22.540 lat (msec) : 10=0.03% 00:13:22.540 cpu : usr=2.10%, sys=8.70%, ctx=3213, majf=0, minf=14 00:13:22.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.540 issued rwts: total=1536,1675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:22.540 00:13:22.540 Run status group 0 (all jobs): 00:13:22.540 READ: bw=20.1MiB/s (21.0MB/s), 4136KiB/s-6138KiB/s (4235kB/s-6285kB/s), io=20.1MiB (21.1MB), run=1001-1001msec 00:13:22.540 WRITE: bw=24.6MiB/s (25.8MB/s), 6138KiB/s-6693KiB/s (6285kB/s-6854kB/s), io=24.7MiB (25.9MB), run=1001-1001msec 00:13:22.540 00:13:22.540 Disk stats (read/write): 00:13:22.540 nvme0n1: ios=1074/1183, merge=0/0, ticks=470/388, in_queue=858, util=90.37% 00:13:22.540 nvme0n2: ios=1068/1183, merge=0/0, ticks=427/384, in_queue=811, util=89.25% 00:13:22.540 nvme0n3: ios=1180/1536, merge=0/0, ticks=372/427, in_queue=799, util=89.18% 00:13:22.540 nvme0n4: ios=1247/1536, merge=0/0, ticks=436/412, in_queue=848, util=90.56% 00:13:22.540 08:13:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:22.540 [global] 00:13:22.540 thread=1 00:13:22.540 invalidate=1 00:13:22.540 rw=randwrite 00:13:22.540 time_based=1 00:13:22.540 runtime=1 00:13:22.540 ioengine=libaio 00:13:22.540 direct=1 00:13:22.540 bs=4096 00:13:22.540 iodepth=1 00:13:22.540 norandommap=0 00:13:22.540 numjobs=1 00:13:22.540 00:13:22.800 verify_dump=1 00:13:22.800 verify_backlog=512 00:13:22.800 verify_state_save=0 00:13:22.800 do_verify=1 00:13:22.800 verify=crc32c-intel 00:13:22.800 [job0] 00:13:22.800 filename=/dev/nvme0n1 00:13:22.800 [job1] 00:13:22.800 filename=/dev/nvme0n2 00:13:22.800 [job2] 00:13:22.800 filename=/dev/nvme0n3 00:13:22.800 [job3] 00:13:22.800 filename=/dev/nvme0n4 00:13:22.800 Could not set queue depth (nvme0n1) 00:13:22.800 Could not set queue depth (nvme0n2) 00:13:22.800 Could not set queue depth (nvme0n3) 00:13:22.800 Could not set queue depth (nvme0n4) 00:13:22.800 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.800 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.800 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.800 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:22.800 fio-3.35 00:13:22.800 Starting 4 threads 00:13:24.178 00:13:24.178 job0: (groupid=0, jobs=1): err= 0: pid=69629: Sun Nov 17 08:13:28 2024 00:13:24.178 read: IOPS=1596, BW=6386KiB/s (6539kB/s)(6392KiB/1001msec) 00:13:24.178 slat (nsec): min=7954, max=37358, avg=10721.50, stdev=3297.48 00:13:24.178 clat (usec): min=197, max=803, avg=284.12, stdev=40.19 00:13:24.178 lat (usec): min=205, max=813, avg=294.84, stdev=40.75 00:13:24.178 
clat percentiles (usec): 00:13:24.178 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 253], 00:13:24.178 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:13:24.178 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 351], 00:13:24.178 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 445], 99.95th=[ 807], 00:13:24.178 | 99.99th=[ 807] 00:13:24.178 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:24.178 slat (usec): min=13, max=100, avg=30.38, stdev=11.20 00:13:24.178 clat (usec): min=114, max=1238, avg=224.85, stdev=46.92 00:13:24.178 lat (usec): min=144, max=1253, avg=255.23, stdev=51.44 00:13:24.178 clat percentiles (usec): 00:13:24.178 | 1.00th=[ 145], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 194], 00:13:24.178 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 233], 00:13:24.178 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 289], 00:13:24.178 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 433], 99.95th=[ 873], 00:13:24.178 | 99.99th=[ 1237] 00:13:24.178 bw ( KiB/s): min= 8192, max= 8192, per=28.15%, avg=8192.00, stdev= 0.00, samples=1 00:13:24.178 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:24.178 lat (usec) : 250=50.00%, 500=49.92%, 1000=0.05% 00:13:24.178 lat (msec) : 2=0.03% 00:13:24.178 cpu : usr=2.00%, sys=5.80%, ctx=3653, majf=0, minf=11 00:13:24.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.178 issued rwts: total=1598,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.178 job1: (groupid=0, jobs=1): err= 0: pid=69630: Sun Nov 17 08:13:28 2024 00:13:24.178 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:24.178 slat (nsec): min=8556, max=47917, avg=13580.45, stdev=5942.61 00:13:24.178 clat (usec): min=210, max=6101, avg=310.31, stdev=172.59 00:13:24.178 lat (usec): min=222, max=6125, avg=323.89, stdev=174.20 00:13:24.178 clat percentiles (usec): 00:13:24.178 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 260], 00:13:24.178 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 297], 00:13:24.178 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 412], 95.00th=[ 461], 00:13:24.178 | 99.00th=[ 529], 99.50th=[ 840], 99.90th=[ 2073], 99.95th=[ 6128], 00:13:24.178 | 99.99th=[ 6128] 00:13:24.178 write: IOPS=1679, BW=6717KiB/s (6878kB/s)(6724KiB/1001msec); 0 zone resets 00:13:24.178 slat (usec): min=13, max=103, avg=30.63, stdev=11.18 00:13:24.178 clat (usec): min=141, max=500, avg=264.76, stdev=76.67 00:13:24.178 lat (usec): min=157, max=542, avg=295.39, stdev=83.45 00:13:24.178 clat percentiles (usec): 00:13:24.178 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 178], 20.00th=[ 196], 00:13:24.178 | 30.00th=[ 212], 40.00th=[ 227], 50.00th=[ 245], 60.00th=[ 269], 00:13:24.178 | 70.00th=[ 306], 80.00th=[ 347], 90.00th=[ 375], 95.00th=[ 400], 00:13:24.178 | 99.00th=[ 441], 99.50th=[ 469], 99.90th=[ 498], 99.95th=[ 502], 00:13:24.178 | 99.99th=[ 502] 00:13:24.178 bw ( KiB/s): min= 7184, max= 7184, per=24.69%, avg=7184.00, stdev= 0.00, samples=1 00:13:24.178 iops : min= 1796, max= 1796, avg=1796.00, stdev= 0.00, samples=1 00:13:24.178 lat (usec) : 250=33.48%, 500=65.59%, 750=0.68%, 1000=0.12% 00:13:24.178 lat (msec) : 2=0.06%, 4=0.03%, 10=0.03% 00:13:24.178 cpu : usr=1.70%, sys=5.60%, 
ctx=3219, majf=0, minf=11 00:13:24.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.178 issued rwts: total=1536,1681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.178 job2: (groupid=0, jobs=1): err= 0: pid=69631: Sun Nov 17 08:13:28 2024 00:13:24.178 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:24.178 slat (nsec): min=7377, max=57376, avg=17821.44, stdev=8475.24 00:13:24.178 clat (usec): min=157, max=535, avg=296.23, stdev=66.20 00:13:24.178 lat (usec): min=165, max=552, avg=314.05, stdev=72.84 00:13:24.178 clat percentiles (usec): 00:13:24.178 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 223], 00:13:24.178 | 30.00th=[ 241], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[ 322], 00:13:24.178 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 396], 00:13:24.178 | 99.00th=[ 445], 99.50th=[ 478], 99.90th=[ 519], 99.95th=[ 537], 00:13:24.178 | 99.99th=[ 537] 00:13:24.178 write: IOPS=2014, BW=8060KiB/s (8253kB/s)(8068KiB/1001msec); 0 zone resets 00:13:24.178 slat (nsec): min=8031, max=96867, avg=21977.99, stdev=10351.76 00:13:24.178 clat (usec): min=104, max=1741, avg=231.12, stdev=71.39 00:13:24.178 lat (usec): min=116, max=1752, avg=253.10, stdev=77.07 00:13:24.178 clat percentiles (usec): 00:13:24.178 | 1.00th=[ 124], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 167], 00:13:24.178 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 239], 60.00th=[ 260], 00:13:24.178 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 00:13:24.178 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 433], 99.95th=[ 578], 00:13:24.178 | 99.99th=[ 1745] 00:13:24.178 bw ( KiB/s): min= 8192, max= 8192, per=28.15%, avg=8192.00, stdev= 0.00, samples=1 00:13:24.178 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:24.178 lat (usec) : 250=45.12%, 500=54.71%, 750=0.14% 00:13:24.178 lat (msec) : 2=0.03% 00:13:24.178 cpu : usr=1.80%, sys=6.20%, ctx=3564, majf=0, minf=11 00:13:24.178 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.178 issued rwts: total=1536,2017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.178 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.178 job3: (groupid=0, jobs=1): err= 0: pid=69632: Sun Nov 17 08:13:28 2024 00:13:24.178 read: IOPS=1219, BW=4879KiB/s (4996kB/s)(4884KiB/1001msec) 00:13:24.178 slat (nsec): min=8392, max=49955, avg=20398.38, stdev=5743.10 00:13:24.178 clat (usec): min=207, max=6057, avg=364.18, stdev=184.67 00:13:24.178 lat (usec): min=252, max=6096, avg=384.57, stdev=185.85 00:13:24.178 clat percentiles (usec): 00:13:24.178 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 297], 00:13:24.178 | 30.00th=[ 318], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 363], 00:13:24.179 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 457], 95.00th=[ 486], 00:13:24.179 | 99.00th=[ 537], 99.50th=[ 627], 99.90th=[ 2040], 99.95th=[ 6063], 00:13:24.179 | 99.99th=[ 6063] 00:13:24.179 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:24.179 slat (usec): min=16, max=108, avg=36.84, stdev=10.37 00:13:24.179 clat (usec): min=138, 
max=496, avg=303.67, stdev=51.31 00:13:24.179 lat (usec): min=162, max=540, avg=340.51, stdev=57.01 00:13:24.179 clat percentiles (usec): 00:13:24.179 | 1.00th=[ 210], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 260], 00:13:24.179 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:13:24.179 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 396], 00:13:24.179 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 494], 99.95th=[ 498], 00:13:24.179 | 99.99th=[ 498] 00:13:24.179 bw ( KiB/s): min= 5800, max= 5800, per=19.93%, avg=5800.00, stdev= 0.00, samples=1 00:13:24.179 iops : min= 1450, max= 1450, avg=1450.00, stdev= 0.00, samples=1 00:13:24.179 lat (usec) : 250=8.09%, 500=90.53%, 750=1.20%, 1000=0.07% 00:13:24.179 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04% 00:13:24.179 cpu : usr=1.70%, sys=7.40%, ctx=2766, majf=0, minf=13 00:13:24.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.179 issued rwts: total=1221,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.179 00:13:24.179 Run status group 0 (all jobs): 00:13:24.179 READ: bw=23.0MiB/s (24.1MB/s), 4879KiB/s-6386KiB/s (4996kB/s-6539kB/s), io=23.0MiB (24.1MB), run=1001-1001msec 00:13:24.179 WRITE: bw=28.4MiB/s (29.8MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.4MiB (29.8MB), run=1001-1001msec 00:13:24.179 00:13:24.179 Disk stats (read/write): 00:13:24.179 nvme0n1: ios=1586/1671, merge=0/0, ticks=456/384, in_queue=840, util=89.07% 00:13:24.179 nvme0n2: ios=1266/1536, merge=0/0, ticks=412/396, in_queue=808, util=90.20% 00:13:24.179 nvme0n3: ios=1567/1595, merge=0/0, ticks=491/355, in_queue=846, util=90.56% 00:13:24.179 nvme0n4: ios=1024/1321, merge=0/0, ticks=356/417, in_queue=773, util=89.90% 00:13:24.179 08:13:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:24.179 [global] 00:13:24.179 thread=1 00:13:24.179 invalidate=1 00:13:24.179 rw=write 00:13:24.179 time_based=1 00:13:24.179 runtime=1 00:13:24.179 ioengine=libaio 00:13:24.179 direct=1 00:13:24.179 bs=4096 00:13:24.179 iodepth=128 00:13:24.179 norandommap=0 00:13:24.179 numjobs=1 00:13:24.179 00:13:24.179 verify_dump=1 00:13:24.179 verify_backlog=512 00:13:24.179 verify_state_save=0 00:13:24.179 do_verify=1 00:13:24.179 verify=crc32c-intel 00:13:24.179 [job0] 00:13:24.179 filename=/dev/nvme0n1 00:13:24.179 [job1] 00:13:24.179 filename=/dev/nvme0n2 00:13:24.179 [job2] 00:13:24.179 filename=/dev/nvme0n3 00:13:24.179 [job3] 00:13:24.179 filename=/dev/nvme0n4 00:13:24.179 Could not set queue depth (nvme0n1) 00:13:24.179 Could not set queue depth (nvme0n2) 00:13:24.179 Could not set queue depth (nvme0n3) 00:13:24.179 Could not set queue depth (nvme0n4) 00:13:24.437 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.437 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.437 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.437 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.437 fio-3.35 00:13:24.437 Starting 4 threads 00:13:25.818 
00:13:25.818 job0: (groupid=0, jobs=1): err= 0: pid=69694: Sun Nov 17 08:13:30 2024 00:13:25.818 read: IOPS=1561, BW=6245KiB/s (6395kB/s)(6276KiB/1005msec) 00:13:25.818 slat (usec): min=9, max=10958, avg=306.57, stdev=1361.56 00:13:25.818 clat (usec): min=772, max=53905, avg=38544.13, stdev=7623.64 00:13:25.818 lat (usec): min=6982, max=53962, avg=38850.70, stdev=7541.83 00:13:25.818 clat percentiles (usec): 00:13:25.818 | 1.00th=[ 7373], 5.00th=[28443], 10.00th=[31851], 20.00th=[34866], 00:13:25.818 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36439], 60.00th=[38536], 00:13:25.818 | 70.00th=[41157], 80.00th=[46400], 90.00th=[49021], 95.00th=[51119], 00:13:25.818 | 99.00th=[51643], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:13:25.818 | 99.99th=[53740] 00:13:25.818 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:13:25.818 slat (usec): min=23, max=12726, avg=243.91, stdev=1213.52 00:13:25.818 clat (usec): min=12398, max=48003, avg=31873.94, stdev=6286.97 00:13:25.818 lat (usec): min=12454, max=48035, avg=32117.85, stdev=6213.09 00:13:25.818 clat percentiles (usec): 00:13:25.818 | 1.00th=[19006], 5.00th=[25560], 10.00th=[26608], 20.00th=[27132], 00:13:25.818 | 30.00th=[27395], 40.00th=[28181], 50.00th=[31065], 60.00th=[32113], 00:13:25.818 | 70.00th=[34341], 80.00th=[38011], 90.00th=[41681], 95.00th=[43254], 00:13:25.818 | 99.00th=[44827], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:13:25.818 | 99.99th=[47973] 00:13:25.818 bw ( KiB/s): min= 7432, max= 8175, per=20.71%, avg=7803.50, stdev=525.38, samples=2 00:13:25.818 iops : min= 1858, max= 2043, avg=1950.50, stdev=130.81, samples=2 00:13:25.818 lat (usec) : 1000=0.03% 00:13:25.818 lat (msec) : 10=0.44%, 20=1.66%, 50=94.53%, 100=3.35% 00:13:25.818 cpu : usr=2.09%, sys=8.37%, ctx=224, majf=0, minf=14 00:13:25.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:13:25.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.818 issued rwts: total=1569,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.818 job1: (groupid=0, jobs=1): err= 0: pid=69699: Sun Nov 17 08:13:30 2024 00:13:25.818 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:25.818 slat (usec): min=9, max=4672, avg=126.89, stdev=595.45 00:13:25.818 clat (usec): min=12141, max=19952, avg=16993.55, stdev=1199.63 00:13:25.818 lat (usec): min=15016, max=19984, avg=17120.44, stdev=1058.08 00:13:25.818 clat percentiles (usec): 00:13:25.818 | 1.00th=[13304], 5.00th=[15401], 10.00th=[15664], 20.00th=[15926], 00:13:25.818 | 30.00th=[16319], 40.00th=[16712], 50.00th=[17171], 60.00th=[17433], 00:13:25.818 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:13:25.818 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:13:25.818 | 99.99th=[20055] 00:13:25.818 write: IOPS=3865, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1002msec); 0 zone resets 00:13:25.818 slat (usec): min=23, max=4333, avg=128.78, stdev=536.53 00:13:25.818 clat (usec): min=535, max=19776, avg=16884.02, stdev=1755.79 00:13:25.818 lat (usec): min=4234, max=19807, avg=17012.80, stdev=1671.77 00:13:25.818 clat percentiles (usec): 00:13:25.818 | 1.00th=[ 8455], 5.00th=[15008], 10.00th=[15664], 20.00th=[16188], 00:13:25.818 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:13:25.818 | 70.00th=[17695], 
80.00th=[17957], 90.00th=[18220], 95.00th=[18744], 00:13:25.818 | 99.00th=[19792], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:13:25.818 | 99.99th=[19792] 00:13:25.818 bw ( KiB/s): min=13824, max=16168, per=39.81%, avg=14996.00, stdev=1657.46, samples=2 00:13:25.818 iops : min= 3456, max= 4042, avg=3749.00, stdev=414.36, samples=2 00:13:25.818 lat (usec) : 750=0.01% 00:13:25.818 lat (msec) : 10=0.86%, 20=99.13% 00:13:25.818 cpu : usr=4.39%, sys=15.67%, ctx=260, majf=0, minf=10 00:13:25.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:25.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.818 issued rwts: total=3584,3873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.818 job2: (groupid=0, jobs=1): err= 0: pid=69701: Sun Nov 17 08:13:30 2024 00:13:25.818 read: IOPS=1461, BW=5845KiB/s (5985kB/s)(5868KiB/1004msec) 00:13:25.818 slat (usec): min=9, max=9821, avg=265.93, stdev=1152.51 00:13:25.818 clat (usec): min=814, max=62749, avg=31847.41, stdev=10698.16 00:13:25.818 lat (usec): min=6951, max=62783, avg=32113.34, stdev=10775.06 00:13:25.818 clat percentiles (usec): 00:13:25.818 | 1.00th=[ 7373], 5.00th=[22152], 10.00th=[23725], 20.00th=[23987], 00:13:25.818 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[29230], 00:13:25.818 | 70.00th=[37487], 80.00th=[45351], 90.00th=[47973], 95.00th=[49546], 00:13:25.818 | 99.00th=[55837], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:13:25.818 | 99.99th=[62653] 00:13:25.818 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:13:25.818 slat (usec): min=23, max=9654, avg=385.36, stdev=1398.62 00:13:25.818 clat (msec): min=27, max=111, avg=51.86, stdev=21.90 00:13:25.818 lat (msec): min=27, max=111, avg=52.24, stdev=22.03 00:13:25.818 clat percentiles (msec): 00:13:25.818 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:13:25.818 | 30.00th=[ 38], 40.00th=[ 44], 50.00th=[ 46], 60.00th=[ 48], 00:13:25.818 | 70.00th=[ 53], 80.00th=[ 72], 90.00th=[ 91], 95.00th=[ 100], 00:13:25.818 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:13:25.818 | 99.99th=[ 111] 00:13:25.818 bw ( KiB/s): min= 4784, max= 7504, per=16.31%, avg=6144.00, stdev=1923.33, samples=2 00:13:25.818 iops : min= 1196, max= 1876, avg=1536.00, stdev=480.83, samples=2 00:13:25.818 lat (usec) : 1000=0.03% 00:13:25.818 lat (msec) : 10=0.73%, 20=0.70%, 50=78.95%, 100=17.08%, 250=2.50% 00:13:25.818 cpu : usr=2.09%, sys=6.88%, ctx=207, majf=0, minf=17 00:13:25.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:13:25.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.818 issued rwts: total=1467,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.818 job3: (groupid=0, jobs=1): err= 0: pid=69702: Sun Nov 17 08:13:30 2024 00:13:25.818 read: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec) 00:13:25.818 slat (usec): min=18, max=15018, avg=307.35, stdev=1627.67 00:13:25.818 clat (usec): min=24618, max=52650, avg=39957.09, stdev=6396.53 00:13:25.818 lat (usec): min=30608, max=52682, avg=40264.44, stdev=6250.34 00:13:25.818 clat percentiles (usec): 00:13:25.818 | 1.00th=[27395], 
5.00th=[31589], 10.00th=[34341], 20.00th=[34866], 00:13:25.818 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36963], 60.00th=[40109], 00:13:25.818 | 70.00th=[44303], 80.00th=[47449], 90.00th=[49546], 95.00th=[51643], 00:13:25.818 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:13:25.819 | 99.99th=[52691] 00:13:25.819 write: IOPS=2003, BW=8016KiB/s (8208kB/s)(8032KiB/1002msec); 0 zone resets 00:13:25.819 slat (usec): min=23, max=13880, avg=252.33, stdev=1258.41 00:13:25.819 clat (usec): min=541, max=48675, avg=31529.24, stdev=6731.02 00:13:25.819 lat (usec): min=6476, max=48734, avg=31781.56, stdev=6645.08 00:13:25.819 clat percentiles (usec): 00:13:25.819 | 1.00th=[10945], 5.00th=[23462], 10.00th=[26870], 20.00th=[27132], 00:13:25.819 | 30.00th=[27395], 40.00th=[28181], 50.00th=[29754], 60.00th=[32113], 00:13:25.819 | 70.00th=[33817], 80.00th=[36439], 90.00th=[42206], 95.00th=[42730], 00:13:25.819 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:13:25.819 | 99.99th=[48497] 00:13:25.819 bw ( KiB/s): min= 6856, max= 8192, per=19.97%, avg=7524.00, stdev=944.69, samples=2 00:13:25.819 iops : min= 1714, max= 2048, avg=1881.00, stdev=236.17, samples=2 00:13:25.819 lat (usec) : 750=0.03% 00:13:25.819 lat (msec) : 10=0.20%, 20=0.96%, 50=95.29%, 100=3.53% 00:13:25.819 cpu : usr=1.90%, sys=8.39%, ctx=194, majf=0, minf=5 00:13:25.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:13:25.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.819 issued rwts: total=1536,2008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.819 00:13:25.819 Run status group 0 (all jobs): 00:13:25.819 READ: bw=31.7MiB/s (33.2MB/s), 5845KiB/s-14.0MiB/s (5985kB/s-14.7MB/s), io=31.9MiB (33.4MB), run=1002-1005msec 00:13:25.819 WRITE: bw=36.8MiB/s (38.6MB/s), 6120KiB/s-15.1MiB/s (6266kB/s-15.8MB/s), io=37.0MiB (38.8MB), run=1002-1005msec 00:13:25.819 00:13:25.819 Disk stats (read/write): 00:13:25.819 nvme0n1: ios=1586/1536, merge=0/0, ticks=14884/9753, in_queue=24637, util=87.78% 00:13:25.819 nvme0n2: ios=3121/3328, merge=0/0, ticks=11917/12034, in_queue=23951, util=89.39% 00:13:25.819 nvme0n3: ios=1099/1536, merge=0/0, ticks=10076/25075, in_queue=35151, util=89.92% 00:13:25.819 nvme0n4: ios=1504/1536, merge=0/0, ticks=14260/10373, in_queue=24633, util=88.73% 00:13:25.819 08:13:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:25.819 [global] 00:13:25.819 thread=1 00:13:25.819 invalidate=1 00:13:25.819 rw=randwrite 00:13:25.819 time_based=1 00:13:25.819 runtime=1 00:13:25.819 ioengine=libaio 00:13:25.819 direct=1 00:13:25.819 bs=4096 00:13:25.819 iodepth=128 00:13:25.819 norandommap=0 00:13:25.819 numjobs=1 00:13:25.819 00:13:25.819 verify_dump=1 00:13:25.819 verify_backlog=512 00:13:25.819 verify_state_save=0 00:13:25.819 do_verify=1 00:13:25.819 verify=crc32c-intel 00:13:25.819 [job0] 00:13:25.819 filename=/dev/nvme0n1 00:13:25.819 [job1] 00:13:25.819 filename=/dev/nvme0n2 00:13:25.819 [job2] 00:13:25.819 filename=/dev/nvme0n3 00:13:25.819 [job3] 00:13:25.819 filename=/dev/nvme0n4 00:13:25.819 Could not set queue depth (nvme0n1) 00:13:25.819 Could not set queue depth (nvme0n2) 00:13:25.819 Could not set queue depth (nvme0n3) 00:13:25.819 Could not set 
queue depth (nvme0n4) 00:13:25.819 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:25.819 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:25.819 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:25.819 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:25.819 fio-3.35 00:13:25.819 Starting 4 threads 00:13:27.199 00:13:27.199 job0: (groupid=0, jobs=1): err= 0: pid=69755: Sun Nov 17 08:13:31 2024 00:13:27.199 read: IOPS=2637, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1004msec) 00:13:27.199 slat (usec): min=14, max=7207, avg=168.85, stdev=826.84 00:13:27.199 clat (usec): min=3382, max=25043, avg=21918.34, stdev=2832.78 00:13:27.199 lat (usec): min=3403, max=25078, avg=22087.18, stdev=2720.68 00:13:27.199 clat percentiles (usec): 00:13:27.199 | 1.00th=[ 8586], 5.00th=[17957], 10.00th=[20055], 20.00th=[21103], 00:13:27.199 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22676], 60.00th=[22938], 00:13:27.199 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[24249], 00:13:27.199 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:13:27.199 | 99.99th=[25035] 00:13:27.199 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:13:27.199 slat (usec): min=23, max=5502, avg=167.54, stdev=753.15 00:13:27.199 clat (usec): min=15282, max=25128, avg=22274.47, stdev=1349.92 00:13:27.199 lat (usec): min=16891, max=25160, avg=22442.01, stdev=1131.73 00:13:27.199 clat percentiles (usec): 00:13:27.199 | 1.00th=[17695], 5.00th=[20055], 10.00th=[20579], 20.00th=[21365], 00:13:27.199 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22414], 60.00th=[22676], 00:13:27.199 | 70.00th=[22938], 80.00th=[23462], 90.00th=[23725], 95.00th=[24249], 00:13:27.199 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:13:27.199 | 99.99th=[25035] 00:13:27.199 bw ( KiB/s): min=11952, max=12288, per=22.88%, avg=12120.00, stdev=237.59, samples=2 00:13:27.199 iops : min= 2988, max= 3072, avg=3030.00, stdev=59.40, samples=2 00:13:27.199 lat (msec) : 4=0.40%, 10=0.58%, 20=6.52%, 50=92.50% 00:13:27.199 cpu : usr=3.69%, sys=12.26%, ctx=180, majf=0, minf=13 00:13:27.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:27.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.199 issued rwts: total=2648,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.199 job1: (groupid=0, jobs=1): err= 0: pid=69756: Sun Nov 17 08:13:31 2024 00:13:27.199 read: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1004msec) 00:13:27.199 slat (usec): min=15, max=4437, avg=121.18, stdev=562.23 00:13:27.199 clat (usec): min=3433, max=18718, avg=16214.00, stdev=1656.12 00:13:27.199 lat (usec): min=3451, max=18737, avg=16335.17, stdev=1566.16 00:13:27.199 clat percentiles (usec): 00:13:27.199 | 1.00th=[ 8717], 5.00th=[13960], 10.00th=[15270], 20.00th=[15664], 00:13:27.199 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16450], 60.00th=[16581], 00:13:27.199 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17957], 00:13:27.199 | 99.00th=[18482], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:13:27.199 | 99.99th=[18744] 
00:13:27.199 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:13:27.199 slat (usec): min=23, max=3949, avg=122.47, stdev=498.72 00:13:27.199 clat (usec): min=12193, max=19256, avg=16330.06, stdev=988.12 00:13:27.199 lat (usec): min=14592, max=20039, avg=16452.53, stdev=860.02 00:13:27.199 clat percentiles (usec): 00:13:27.199 | 1.00th=[13173], 5.00th=[15139], 10.00th=[15270], 20.00th=[15664], 00:13:27.199 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16450], 00:13:27.199 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17433], 95.00th=[18220], 00:13:27.199 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:13:27.199 | 99.99th=[19268] 00:13:27.199 bw ( KiB/s): min=16095, max=16416, per=30.68%, avg=16255.50, stdev=226.98, samples=2 00:13:27.199 iops : min= 4023, max= 4104, avg=4063.50, stdev=57.28, samples=2 00:13:27.199 lat (msec) : 4=0.30%, 10=0.51%, 20=99.19% 00:13:27.199 cpu : usr=4.49%, sys=17.15%, ctx=243, majf=0, minf=16 00:13:27.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:27.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.199 issued rwts: total=3679,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.199 job2: (groupid=0, jobs=1): err= 0: pid=69757: Sun Nov 17 08:13:31 2024 00:13:27.199 read: IOPS=3176, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:13:27.199 slat (usec): min=5, max=4799, avg=142.47, stdev=652.77 00:13:27.199 clat (usec): min=3826, max=22783, avg=18776.45, stdev=1963.27 00:13:27.199 lat (usec): min=3845, max=22979, avg=18918.93, stdev=1864.04 00:13:27.199 clat percentiles (usec): 00:13:27.199 | 1.00th=[ 9372], 5.00th=[15926], 10.00th=[17433], 20.00th=[18220], 00:13:27.199 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:13:27.199 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20317], 95.00th=[20841], 00:13:27.199 | 99.00th=[21365], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:13:27.199 | 99.99th=[22676] 00:13:27.199 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:13:27.199 slat (usec): min=6, max=7306, avg=140.37, stdev=610.42 00:13:27.199 clat (usec): min=13510, max=23042, avg=18582.68, stdev=1173.54 00:13:27.200 lat (usec): min=15187, max=24153, avg=18723.05, stdev=1051.37 00:13:27.200 clat percentiles (usec): 00:13:27.200 | 1.00th=[14877], 5.00th=[16909], 10.00th=[17171], 20.00th=[17695], 00:13:27.200 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 00:13:27.200 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20055], 95.00th=[20317], 00:13:27.200 | 99.00th=[20841], 99.50th=[20841], 99.90th=[22938], 99.95th=[22938], 00:13:27.200 | 99.99th=[22938] 00:13:27.200 bw ( KiB/s): min=13867, max=14746, per=27.00%, avg=14306.50, stdev=621.55, samples=2 00:13:27.200 iops : min= 3466, max= 3686, avg=3576.00, stdev=155.56, samples=2 00:13:27.200 lat (msec) : 4=0.12%, 10=0.69%, 20=86.69%, 50=12.50% 00:13:27.200 cpu : usr=3.69%, sys=13.94%, ctx=266, majf=0, minf=11 00:13:27.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:27.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.200 issued rwts: total=3192,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.200 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:13:27.200 job3: (groupid=0, jobs=1): err= 0: pid=69758: Sun Nov 17 08:13:31 2024 00:13:27.200 read: IOPS=2478, BW=9914KiB/s (10.2MB/s)(9964KiB/1005msec) 00:13:27.200 slat (usec): min=9, max=7708, avg=196.28, stdev=817.91 00:13:27.200 clat (usec): min=665, max=33906, avg=25053.07, stdev=3595.25 00:13:27.200 lat (usec): min=5020, max=33952, avg=25249.35, stdev=3646.89 00:13:27.200 clat percentiles (usec): 00:13:27.200 | 1.00th=[ 9241], 5.00th=[20579], 10.00th=[22676], 20.00th=[23725], 00:13:27.200 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:13:27.200 | 70.00th=[26084], 80.00th=[27132], 90.00th=[28443], 95.00th=[29754], 00:13:27.200 | 99.00th=[32113], 99.50th=[32375], 99.90th=[33424], 99.95th=[33424], 00:13:27.200 | 99.99th=[33817] 00:13:27.200 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:13:27.200 slat (usec): min=18, max=8864, avg=188.38, stdev=899.83 00:13:27.200 clat (usec): min=19166, max=34018, avg=25001.31, stdev=2252.83 00:13:27.200 lat (usec): min=19202, max=34080, avg=25189.69, stdev=2390.91 00:13:27.200 clat percentiles (usec): 00:13:27.200 | 1.00th=[20055], 5.00th=[20841], 10.00th=[22676], 20.00th=[23462], 00:13:27.200 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:13:27.200 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[30016], 00:13:27.200 | 99.00th=[31851], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:13:27.200 | 99.99th=[33817] 00:13:27.200 bw ( KiB/s): min= 9556, max=10904, per=19.31%, avg=10230.00, stdev=953.18, samples=2 00:13:27.200 iops : min= 2389, max= 2726, avg=2557.50, stdev=238.29, samples=2 00:13:27.200 lat (usec) : 750=0.02% 00:13:27.200 lat (msec) : 10=0.83%, 20=1.11%, 50=98.04% 00:13:27.200 cpu : usr=2.79%, sys=10.76%, ctx=234, majf=0, minf=13 00:13:27.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:27.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.200 issued rwts: total=2491,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.200 00:13:27.200 Run status group 0 (all jobs): 00:13:27.200 READ: bw=46.7MiB/s (48.9MB/s), 9914KiB/s-14.3MiB/s (10.2MB/s-15.0MB/s), io=46.9MiB (49.2MB), run=1004-1005msec 00:13:27.200 WRITE: bw=51.7MiB/s (54.3MB/s), 9.95MiB/s-15.9MiB/s (10.4MB/s-16.7MB/s), io=52.0MiB (54.5MB), run=1004-1005msec 00:13:27.200 00:13:27.200 Disk stats (read/write): 00:13:27.200 nvme0n1: ios=2322/2560, merge=0/0, ticks=11502/12573, in_queue=24075, util=87.26% 00:13:27.200 nvme0n2: ios=3121/3501, merge=0/0, ticks=10932/11817, in_queue=22749, util=86.97% 00:13:27.200 nvme0n3: ios=2625/3072, merge=0/0, ticks=11354/11988, in_queue=23342, util=88.17% 00:13:27.200 nvme0n4: ios=2048/2176, merge=0/0, ticks=17165/16047, in_queue=33212, util=89.53% 00:13:27.200 08:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:27.200 08:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69771 00:13:27.200 08:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:27.200 08:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:27.200 [global] 00:13:27.200 thread=1 00:13:27.200 invalidate=1 
00:13:27.200 rw=read 00:13:27.200 time_based=1 00:13:27.200 runtime=10 00:13:27.200 ioengine=libaio 00:13:27.200 direct=1 00:13:27.200 bs=4096 00:13:27.200 iodepth=1 00:13:27.200 norandommap=1 00:13:27.200 numjobs=1 00:13:27.200 00:13:27.200 [job0] 00:13:27.200 filename=/dev/nvme0n1 00:13:27.200 [job1] 00:13:27.200 filename=/dev/nvme0n2 00:13:27.200 [job2] 00:13:27.200 filename=/dev/nvme0n3 00:13:27.200 [job3] 00:13:27.200 filename=/dev/nvme0n4 00:13:27.200 Could not set queue depth (nvme0n1) 00:13:27.200 Could not set queue depth (nvme0n2) 00:13:27.200 Could not set queue depth (nvme0n3) 00:13:27.200 Could not set queue depth (nvme0n4) 00:13:27.200 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.200 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.200 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.200 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:27.200 fio-3.35 00:13:27.200 Starting 4 threads 00:13:30.491 08:13:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:30.491 fio: pid=69814, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:30.491 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35606528, buflen=4096 00:13:30.491 08:13:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:30.491 fio: pid=69813, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:30.491 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35565568, buflen=4096 00:13:30.491 08:13:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.491 08:13:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:30.751 fio: pid=69811, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:30.751 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41336832, buflen=4096 00:13:30.751 08:13:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.751 08:13:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:31.011 fio: pid=69812, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:31.011 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51367936, buflen=4096 00:13:31.011 00:13:31.011 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69811: Sun Nov 17 08:13:35 2024 00:13:31.011 read: IOPS=3143, BW=12.3MiB/s (12.9MB/s)(39.4MiB/3211msec) 00:13:31.011 slat (usec): min=5, max=16443, avg=15.44, stdev=235.74 00:13:31.011 clat (usec): min=121, max=7824, avg=301.68, stdev=161.15 00:13:31.011 lat (usec): min=135, max=16660, avg=317.12, stdev=284.44 00:13:31.011 clat percentiles (usec): 00:13:31.011 | 1.00th=[ 153], 5.00th=[ 190], 10.00th=[ 208], 20.00th=[ 255], 00:13:31.011 | 30.00th=[ 277], 
40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 314], 00:13:31.011 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 375], 00:13:31.011 | 99.00th=[ 412], 99.50th=[ 465], 99.90th=[ 1713], 99.95th=[ 3720], 00:13:31.011 | 99.99th=[ 6521] 00:13:31.011 bw ( KiB/s): min=11688, max=13384, per=28.04%, avg=12364.00, stdev=583.55, samples=6 00:13:31.011 iops : min= 2922, max= 3346, avg=3091.00, stdev=145.89, samples=6 00:13:31.011 lat (usec) : 250=18.38%, 500=81.18%, 750=0.20%, 1000=0.06% 00:13:31.011 lat (msec) : 2=0.07%, 4=0.05%, 10=0.05% 00:13:31.011 cpu : usr=0.72%, sys=3.55%, ctx=10100, majf=0, minf=1 00:13:31.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 issued rwts: total=10093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.011 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69812: Sun Nov 17 08:13:35 2024 00:13:31.011 read: IOPS=3455, BW=13.5MiB/s (14.1MB/s)(49.0MiB/3630msec) 00:13:31.011 slat (usec): min=7, max=10368, avg=14.95, stdev=183.93 00:13:31.011 clat (usec): min=4, max=3318, avg=273.46, stdev=83.09 00:13:31.011 lat (usec): min=116, max=10715, avg=288.41, stdev=202.53 00:13:31.011 clat percentiles (usec): 00:13:31.011 | 1.00th=[ 139], 5.00th=[ 155], 10.00th=[ 169], 20.00th=[ 210], 00:13:31.011 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 289], 00:13:31.011 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 363], 95.00th=[ 388], 00:13:31.011 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 717], 99.95th=[ 1549], 00:13:31.011 | 99.99th=[ 2311] 00:13:31.011 bw ( KiB/s): min=12584, max=20328, per=31.44%, avg=13860.86, stdev=2856.57, samples=7 00:13:31.011 iops : min= 3146, max= 5082, avg=3465.14, stdev=714.18, samples=7 00:13:31.011 lat (usec) : 10=0.01%, 250=29.04%, 500=70.75%, 750=0.10%, 1000=0.02% 00:13:31.011 lat (msec) : 2=0.02%, 4=0.04% 00:13:31.011 cpu : usr=0.88%, sys=3.31%, ctx=12556, majf=0, minf=2 00:13:31.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 issued rwts: total=12542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.011 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69813: Sun Nov 17 08:13:35 2024 00:13:31.011 read: IOPS=2870, BW=11.2MiB/s (11.8MB/s)(33.9MiB/3025msec) 00:13:31.011 slat (usec): min=8, max=12682, avg=24.57, stdev=152.77 00:13:31.011 clat (usec): min=193, max=2933, avg=321.07, stdev=60.12 00:13:31.011 lat (usec): min=207, max=13006, avg=345.65, stdev=164.33 00:13:31.011 clat percentiles (usec): 00:13:31.011 | 1.00th=[ 227], 5.00th=[ 249], 10.00th=[ 265], 20.00th=[ 281], 00:13:31.011 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:13:31.011 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 396], 00:13:31.011 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 482], 99.95th=[ 799], 00:13:31.011 | 99.99th=[ 2933] 00:13:31.011 bw ( KiB/s): min=10984, max=11800, per=26.23%, avg=11563.00, stdev=335.03, samples=5 00:13:31.011 iops : min= 2746, max= 2950, 
avg=2890.60, stdev=83.65, samples=5 00:13:31.011 lat (usec) : 250=5.65%, 500=94.27%, 750=0.01%, 1000=0.01% 00:13:31.011 lat (msec) : 2=0.02%, 4=0.02% 00:13:31.011 cpu : usr=1.88%, sys=5.29%, ctx=8687, majf=0, minf=2 00:13:31.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 issued rwts: total=8684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.011 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69814: Sun Nov 17 08:13:35 2024 00:13:31.011 read: IOPS=3070, BW=12.0MiB/s (12.6MB/s)(34.0MiB/2831msec) 00:13:31.011 slat (usec): min=6, max=137, avg=11.73, stdev= 4.55 00:13:31.011 clat (usec): min=141, max=3016, avg=312.28, stdev=59.33 00:13:31.011 lat (usec): min=148, max=3029, avg=324.00, stdev=59.14 00:13:31.011 clat percentiles (usec): 00:13:31.011 | 1.00th=[ 202], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 277], 00:13:31.011 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 322], 00:13:31.011 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 379], 00:13:31.011 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 611], 99.95th=[ 857], 00:13:31.011 | 99.99th=[ 3032] 00:13:31.011 bw ( KiB/s): min=11696, max=12952, per=27.99%, avg=12340.80, stdev=474.25, samples=5 00:13:31.011 iops : min= 2924, max= 3238, avg=3085.20, stdev=118.56, samples=5 00:13:31.011 lat (usec) : 250=5.65%, 500=94.10%, 750=0.18%, 1000=0.01% 00:13:31.011 lat (msec) : 2=0.02%, 4=0.02% 00:13:31.011 cpu : usr=1.13%, sys=3.46%, ctx=8695, majf=0, minf=2 00:13:31.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.011 issued rwts: total=8694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.011 00:13:31.011 Run status group 0 (all jobs): 00:13:31.011 READ: bw=43.1MiB/s (45.1MB/s), 11.2MiB/s-13.5MiB/s (11.8MB/s-14.1MB/s), io=156MiB (164MB), run=2831-3630msec 00:13:31.011 00:13:31.011 Disk stats (read/write): 00:13:31.011 nvme0n1: ios=9678/0, merge=0/0, ticks=2835/0, in_queue=2835, util=94.24% 00:13:31.011 nvme0n2: ios=12512/0, merge=0/0, ticks=3448/0, in_queue=3448, util=95.64% 00:13:31.011 nvme0n3: ios=8334/0, merge=0/0, ticks=2700/0, in_queue=2700, util=96.54% 00:13:31.011 nvme0n4: ios=8086/0, merge=0/0, ticks=2485/0, in_queue=2485, util=96.36% 00:13:31.271 08:13:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:31.271 08:13:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:31.840 08:13:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:31.840 08:13:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:32.099 08:13:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:13:32.099 08:13:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:32.667 08:13:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:32.667 08:13:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:32.926 08:13:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:32.926 08:13:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69771 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.496 nvmf hotplug test: fio failed as expected 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:33.496 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:33.755 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:33.755 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:33.755 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.756 rmmod nvme_tcp 00:13:33.756 rmmod nvme_fabrics 00:13:33.756 rmmod nvme_keyring 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69389 ']' 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69389 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69389 ']' 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69389 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69389 00:13:33.756 killing process with pid 69389 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69389' 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69389 00:13:33.756 08:13:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69389 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:35.136 08:13:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:35.136 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:35.137 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:35.137 08:13:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:35.137 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:13:35.396 00:13:35.396 real 0m22.089s 00:13:35.396 user 1m20.588s 00:13:35.396 sys 0m9.838s 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.396 ************************************ 00:13:35.396 END TEST nvmf_fio_target 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.396 ************************************ 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:35.396 ************************************ 00:13:35.396 START TEST nvmf_bdevio 00:13:35.396 ************************************ 00:13:35.396 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:35.396 * Looking for test storage... 
00:13:35.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:35.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.657 --rc genhtml_branch_coverage=1 00:13:35.657 --rc genhtml_function_coverage=1 00:13:35.657 --rc genhtml_legend=1 00:13:35.657 --rc geninfo_all_blocks=1 00:13:35.657 --rc geninfo_unexecuted_blocks=1 00:13:35.657 00:13:35.657 ' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:35.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.657 --rc genhtml_branch_coverage=1 00:13:35.657 --rc genhtml_function_coverage=1 00:13:35.657 --rc genhtml_legend=1 00:13:35.657 --rc geninfo_all_blocks=1 00:13:35.657 --rc geninfo_unexecuted_blocks=1 00:13:35.657 00:13:35.657 ' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:35.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.657 --rc genhtml_branch_coverage=1 00:13:35.657 --rc genhtml_function_coverage=1 00:13:35.657 --rc genhtml_legend=1 00:13:35.657 --rc geninfo_all_blocks=1 00:13:35.657 --rc geninfo_unexecuted_blocks=1 00:13:35.657 00:13:35.657 ' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:35.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.657 --rc genhtml_branch_coverage=1 00:13:35.657 --rc genhtml_function_coverage=1 00:13:35.657 --rc genhtml_legend=1 00:13:35.657 --rc geninfo_all_blocks=1 00:13:35.657 --rc geninfo_unexecuted_blocks=1 00:13:35.657 00:13:35.657 ' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:35.657 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.658 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
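Note on the nvmftestinit call traced here: with NET_TYPE=virt and --transport=tcp, the harness skips the physical-NIC branches and falls through to nvmf_veth_init, as the nvmf/common.sh trace below shows. A simplified sketch of that dispatch (branch bodies reduced to placeholders; the real logic lives in test/nvmf/common.sh):

    if [[ $NET_TYPE == phy ]]; then
        :                         # physical NICs, not exercised in this run
    elif [[ $NET_TYPE == phy-fallback ]]; then
        :                         # also skipped here
    elif [[ $TEST_TRANSPORT == tcp ]]; then
        nvmf_veth_init            # build a veth/bridge topology in a network namespace
    fi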
00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:35.658 Cannot find device "nvmf_init_br" 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:35.658 Cannot find device "nvmf_init_br2" 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:35.658 Cannot find device "nvmf_tgt_br" 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.658 Cannot find device "nvmf_tgt_br2" 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:35.658 Cannot find device "nvmf_init_br" 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:13:35.658 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:35.917 Cannot find device "nvmf_init_br2" 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:35.917 Cannot find device "nvmf_tgt_br" 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:35.917 Cannot find device "nvmf_tgt_br2" 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:35.917 Cannot find device "nvmf_br" 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:35.917 Cannot find device "nvmf_init_if" 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:35.917 Cannot find device "nvmf_init_if2" 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.917 
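The ip commands that follow build the virtual test topology inside and around the nvmf_tgt_ns_spdk namespace just created. Condensed into a sketch (one veth pair per side shown; the *_if2/*_br2 interfaces are set up the same way, the individual "ip link set ... up" steps are omitted, and each command here appears in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge joins the two halves
    ip link set nvmf_tgt_br master nvmf_br
    # followed by iptables ACCEPT rules for TCP port 4420 and ping checks of 10.0.0.1-10.0.0.4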
08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:35.917 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.177 08:13:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:36.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:13:36.177 00:13:36.177 --- 10.0.0.3 ping statistics --- 00:13:36.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.177 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:36.177 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:36.177 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:13:36.177 00:13:36.177 --- 10.0.0.4 ping statistics --- 00:13:36.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.177 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:13:36.177 00:13:36.177 --- 10.0.0.1 ping statistics --- 00:13:36.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.177 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:36.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:36.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:36.177 00:13:36.177 --- 10.0.0.2 ping statistics --- 00:13:36.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.177 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70160 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70160 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70160 ']' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.177 08:13:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 [2024-11-17 08:13:41.275217] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
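For context on the nvmfappstart step traced just above (pid 70160): the target binary is launched inside the namespace with core mask 0x78, and the harness waits for the RPC socket before issuing any rpc_cmd calls. A minimal standalone sketch, assuming the default /var/tmp/spdk.sock socket and using a simple poll in place of the harness's waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Poll until the RPC socket answers before provisioning the target.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done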
00:13:36.438 [2024-11-17 08:13:41.275357] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.697 [2024-11-17 08:13:41.467050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.697 [2024-11-17 08:13:41.603420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.697 [2024-11-17 08:13:41.603478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.697 [2024-11-17 08:13:41.603496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.697 [2024-11-17 08:13:41.603507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.697 [2024-11-17 08:13:41.603522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.697 [2024-11-17 08:13:41.606209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:36.697 [2024-11-17 08:13:41.606590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:36.697 [2024-11-17 08:13:41.606817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:36.697 [2024-11-17 08:13:41.606821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.956 [2024-11-17 08:13:41.858531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.215 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.215 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:37.215 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.215 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:37.216 [2024-11-17 08:13:42.116934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.216 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:37.475 Malloc0 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:37.475 [2024-11-17 08:13:42.269798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:37.475 { 00:13:37.475 "params": { 00:13:37.475 "name": "Nvme$subsystem", 00:13:37.475 "trtype": "$TEST_TRANSPORT", 00:13:37.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:37.475 "adrfam": "ipv4", 00:13:37.475 "trsvcid": "$NVMF_PORT", 00:13:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:37.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:37.475 "hdgst": ${hdgst:-false}, 00:13:37.475 "ddgst": ${ddgst:-false} 00:13:37.475 }, 00:13:37.475 "method": "bdev_nvme_attach_controller" 00:13:37.475 } 00:13:37.475 EOF 00:13:37.475 )") 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
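The rpc_cmd calls traced above (create the TCP transport, a 64 MiB / 512 B malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and its 10.0.0.3:4420 listener) provision the target that bdevio is about to exercise. rpc_cmd here is the harness's wrapper around scripts/rpc.py; run standalone, the same sequence would look roughly like this (socket path assumed to be the default):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    $rpc -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    $rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420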
00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:37.475 08:13:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:37.475 "params": { 00:13:37.475 "name": "Nvme1", 00:13:37.475 "trtype": "tcp", 00:13:37.475 "traddr": "10.0.0.3", 00:13:37.475 "adrfam": "ipv4", 00:13:37.475 "trsvcid": "4420", 00:13:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.475 "hdgst": false, 00:13:37.475 "ddgst": false 00:13:37.475 }, 00:13:37.475 "method": "bdev_nvme_attach_controller" 00:13:37.475 }' 00:13:37.475 [2024-11-17 08:13:42.381595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:37.475 [2024-11-17 08:13:42.381716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70196 ] 00:13:37.735 [2024-11-17 08:13:42.566334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.735 [2024-11-17 08:13:42.713045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.735 [2024-11-17 08:13:42.713192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.735 [2024-11-17 08:13:42.713241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.994 [2024-11-17 08:13:42.986179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.254 I/O targets: 00:13:38.254 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:38.254 00:13:38.254 00:13:38.254 CUnit - A unit testing framework for C - Version 2.1-3 00:13:38.254 http://cunit.sourceforge.net/ 00:13:38.254 00:13:38.254 00:13:38.254 Suite: bdevio tests on: Nvme1n1 00:13:38.254 Test: blockdev write read block ...passed 00:13:38.254 Test: blockdev write zeroes read block ...passed 00:13:38.254 Test: blockdev write zeroes read no split ...passed 00:13:38.514 Test: blockdev write zeroes read split ...passed 00:13:38.514 Test: blockdev write zeroes read split partial ...passed 00:13:38.514 Test: blockdev reset ...[2024-11-17 08:13:43.309347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:38.514 [2024-11-17 08:13:43.309490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:38.514 [2024-11-17 08:13:43.327926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:38.514 passed 00:13:38.514 Test: blockdev write read 8 blocks ...passed 00:13:38.514 Test: blockdev write read size > 128k ...passed 00:13:38.514 Test: blockdev write read invalid size ...passed 00:13:38.514 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:38.514 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:38.514 Test: blockdev write read max offset ...passed 00:13:38.514 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:38.514 Test: blockdev writev readv 8 blocks ...passed 00:13:38.514 Test: blockdev writev readv 30 x 1block ...passed 00:13:38.514 Test: blockdev writev readv block ...passed 00:13:38.514 Test: blockdev writev readv size > 128k ...passed 00:13:38.514 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:38.514 Test: blockdev comparev and writev ...[2024-11-17 08:13:43.339129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.514 [2024-11-17 08:13:43.339195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:38.514 [2024-11-17 08:13:43.339223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.514 [2024-11-17 08:13:43.339243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:38.514 [2024-11-17 08:13:43.339689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.514 [2024-11-17 08:13:43.339724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:38.514 [2024-11-17 08:13:43.339743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.514 [2024-11-17 08:13:43.339760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:38.514 [2024-11-17 08:13:43.340418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.515 [2024-11-17 08:13:43.340459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:38.515 [2024-11-17 08:13:43.340480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.515 [2024-11-17 08:13:43.340499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:38.515 [2024-11-17 08:13:43.340905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.515 [2024-11-17 08:13:43.340939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:38.515 [2024-11-17 08:13:43.340958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:38.515 [2024-11-17 08:13:43.340975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:38.515 passed 00:13:38.515 Test: blockdev nvme passthru rw ...passed 00:13:38.515 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:13:43.342250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.515 [2024-11-17 08:13:43.342289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:38.515 [2024-11-17 08:13:43.342434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.515 [2024-11-17 08:13:43.342466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:38.515 [2024-11-17 08:13:43.342605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.515 [2024-11-17 08:13:43.342631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:38.515 passed 00:13:38.515 Test: blockdev nvme admin passthru ...[2024-11-17 08:13:43.342773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:38.515 [2024-11-17 08:13:43.342808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:38.515 passed 00:13:38.515 Test: blockdev copy ...passed 00:13:38.515 00:13:38.515 Run Summary: Type Total Ran Passed Failed Inactive 00:13:38.515 suites 1 1 n/a 0 0 00:13:38.515 tests 23 23 23 0 0 00:13:38.515 asserts 152 152 152 0 n/a 00:13:38.515 00:13:38.515 Elapsed time = 0.333 seconds 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.896 rmmod nvme_tcp 00:13:39.896 rmmod nvme_fabrics 00:13:39.896 rmmod nvme_keyring 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
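The nvmftestfini teardown that continues below removes the firewall rules through iptr: every rule the harness added earlier via ipts carries an 'SPDK_NVMF:' comment, so cleanup is a filter over the saved ruleset rather than a list of individually tracked rules. A simplified sketch of that pair of helpers, assuming they do no more than what the trace shows:

    ipts() {
        # insert a rule, tagged with its own arguments so it can be found again
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # drop every tagged rule and restore everything else untouched
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }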
00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70160 ']' 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70160 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70160 ']' 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70160 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70160 00:13:39.896 killing process with pid 70160 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70160' 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70160 00:13:39.896 08:13:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70160 00:13:41.276 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.276 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.276 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.276 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.535 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:13:41.836 00:13:41.836 real 0m6.337s 00:13:41.836 user 0m22.928s 00:13:41.836 sys 0m1.638s 00:13:41.836 ************************************ 00:13:41.836 END TEST nvmf_bdevio 00:13:41.836 ************************************ 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:41.836 ************************************ 00:13:41.836 END TEST nvmf_target_core 00:13:41.836 ************************************ 00:13:41.836 00:13:41.836 real 2m58.907s 00:13:41.836 user 7m43.120s 00:13:41.836 sys 1m2.995s 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:41.836 08:13:46 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:41.836 08:13:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:41.836 08:13:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.836 08:13:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.836 ************************************ 00:13:41.836 START TEST nvmf_target_extra 00:13:41.836 ************************************ 00:13:41.836 08:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:42.131 * Looking for test storage... 
00:13:42.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.131 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.132 08:13:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.132 --rc genhtml_branch_coverage=1 00:13:42.132 --rc genhtml_function_coverage=1 00:13:42.132 --rc genhtml_legend=1 00:13:42.132 --rc geninfo_all_blocks=1 00:13:42.132 --rc geninfo_unexecuted_blocks=1 00:13:42.132 00:13:42.132 ' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.132 --rc genhtml_branch_coverage=1 00:13:42.132 --rc genhtml_function_coverage=1 00:13:42.132 --rc genhtml_legend=1 00:13:42.132 --rc geninfo_all_blocks=1 00:13:42.132 --rc geninfo_unexecuted_blocks=1 00:13:42.132 00:13:42.132 ' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.132 --rc genhtml_branch_coverage=1 00:13:42.132 --rc genhtml_function_coverage=1 00:13:42.132 --rc genhtml_legend=1 00:13:42.132 --rc geninfo_all_blocks=1 00:13:42.132 --rc geninfo_unexecuted_blocks=1 00:13:42.132 00:13:42.132 ' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.132 --rc genhtml_branch_coverage=1 00:13:42.132 --rc genhtml_function_coverage=1 00:13:42.132 --rc genhtml_legend=1 00:13:42.132 --rc geninfo_all_blocks=1 00:13:42.132 --rc geninfo_unexecuted_blocks=1 00:13:42.132 00:13:42.132 ' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.132 08:13:47 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.132 ************************************ 00:13:42.132 START TEST nvmf_auth_target 00:13:42.132 ************************************ 00:13:42.132 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:42.394 * Looking for test storage... 
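The trace above hands control to the nvmf_auth_target test via run_test with --transport=tcp. As a minimal sketch only (it assumes the same repo checkout path shown in the trace, a root shell, and that the SPDK test environment has already been prepared the way the harness does), the same test can be launched directly:

  # run the DH-HMAC-CHAP auth test over TCP, outside the run_test wrapper
  sudo /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp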
00:13:42.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.394 --rc genhtml_branch_coverage=1 00:13:42.394 --rc genhtml_function_coverage=1 00:13:42.394 --rc genhtml_legend=1 00:13:42.394 --rc geninfo_all_blocks=1 00:13:42.394 --rc geninfo_unexecuted_blocks=1 00:13:42.394 00:13:42.394 ' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.394 --rc genhtml_branch_coverage=1 00:13:42.394 --rc genhtml_function_coverage=1 00:13:42.394 --rc genhtml_legend=1 00:13:42.394 --rc geninfo_all_blocks=1 00:13:42.394 --rc geninfo_unexecuted_blocks=1 00:13:42.394 00:13:42.394 ' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.394 --rc genhtml_branch_coverage=1 00:13:42.394 --rc genhtml_function_coverage=1 00:13:42.394 --rc genhtml_legend=1 00:13:42.394 --rc geninfo_all_blocks=1 00:13:42.394 --rc geninfo_unexecuted_blocks=1 00:13:42.394 00:13:42.394 ' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.394 --rc genhtml_branch_coverage=1 00:13:42.394 --rc genhtml_function_coverage=1 00:13:42.394 --rc genhtml_legend=1 00:13:42.394 --rc geninfo_all_blocks=1 00:13:42.394 --rc geninfo_unexecuted_blocks=1 00:13:42.394 00:13:42.394 ' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.394 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.395 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.395 
08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:42.395 Cannot find device "nvmf_init_br" 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:42.395 Cannot find device "nvmf_init_br2" 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:42.395 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:42.655 Cannot find device "nvmf_tgt_br" 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.655 Cannot find device "nvmf_tgt_br2" 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:42.655 Cannot find device "nvmf_init_br" 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:42.655 Cannot find device "nvmf_init_br2" 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:13:42.655 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:42.656 Cannot find device "nvmf_tgt_br" 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:42.656 Cannot find device "nvmf_tgt_br2" 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:42.656 Cannot find device "nvmf_br" 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:42.656 Cannot find device "nvmf_init_if" 00:13:42.656 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:42.656 Cannot find device "nvmf_init_if2" 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:42.656 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.916 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.917 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:42.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:42.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:42.917 00:13:42.917 --- 10.0.0.3 ping statistics --- 00:13:42.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.917 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:42.917 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:42.917 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:13:42.917 00:13:42.917 --- 10.0.0.4 ping statistics --- 00:13:42.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.917 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:42.917 00:13:42.917 --- 10.0.0.1 ping statistics --- 00:13:42.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.917 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:42.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:42.917 00:13:42.917 --- 10.0.0.2 ping statistics --- 00:13:42.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.917 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70550 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70550 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70550 ']' 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
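Summarizing the nvmf_veth_init trace above: the initiator interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay on the host, the target interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, both sides are joined through the nvmf_br bridge, and iptables ACCEPT rules open TCP port 4420 before the ping reachability checks. A condensed sketch of the same topology for one interface pair, using only commands that appear in the trace (run as root; the -m comment markers the script adds are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3    # host -> target-namespace reachability, as checked above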
00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.917 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.857 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.857 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:43.857 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.857 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.857 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=70582 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ca80e939d55513afe2324924246df7d2d14f4fde1dba3636 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nGX 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ca80e939d55513afe2324924246df7d2d14f4fde1dba3636 0 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ca80e939d55513afe2324924246df7d2d14f4fde1dba3636 0 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ca80e939d55513afe2324924246df7d2d14f4fde1dba3636 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.118 08:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nGX 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nGX 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nGX 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d125b50050d39ac6d519621b76c3896194b80be09f21fedbc2323790fb7d4653 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hdj 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d125b50050d39ac6d519621b76c3896194b80be09f21fedbc2323790fb7d4653 3 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d125b50050d39ac6d519621b76c3896194b80be09f21fedbc2323790fb7d4653 3 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d125b50050d39ac6d519621b76c3896194b80be09f21fedbc2323790fb7d4653 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:44.118 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hdj 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hdj 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.hdj 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:44.118 08:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=55b416bf314c6b3e2c67d9ebe1255d19 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cke 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 55b416bf314c6b3e2c67d9ebe1255d19 1 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 55b416bf314c6b3e2c67d9ebe1255d19 1 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=55b416bf314c6b3e2c67d9ebe1255d19 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cke 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cke 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.cke 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=272e19755382a6d8323260e9dcee6ef874565692c600ceee 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BR9 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 272e19755382a6d8323260e9dcee6ef874565692c600ceee 2 00:13:44.118 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 272e19755382a6d8323260e9dcee6ef874565692c600ceee 2 00:13:44.119 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.119 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.119 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=272e19755382a6d8323260e9dcee6ef874565692c600ceee 00:13:44.119 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:44.119 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BR9 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BR9 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BR9 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6fb3b5e95fae84eb02909a088a2b5bb8484c69289bb7e997 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iKZ 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6fb3b5e95fae84eb02909a088a2b5bb8484c69289bb7e997 2 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6fb3b5e95fae84eb02909a088a2b5bb8484c69289bb7e997 2 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6fb3b5e95fae84eb02909a088a2b5bb8484c69289bb7e997 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iKZ 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iKZ 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.iKZ 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.383 08:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=24a850027c825aea0b15b1f677854662 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.n0S 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 24a850027c825aea0b15b1f677854662 1 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 24a850027c825aea0b15b1f677854662 1 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=24a850027c825aea0b15b1f677854662 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.n0S 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.n0S 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.n0S 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e1b9fbd02958586fde80bae8bc3a17114ad8dc6482ff35bf12f5832ad56a8c4 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pVV 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
0e1b9fbd02958586fde80bae8bc3a17114ad8dc6482ff35bf12f5832ad56a8c4 3 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e1b9fbd02958586fde80bae8bc3a17114ad8dc6482ff35bf12f5832ad56a8c4 3 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e1b9fbd02958586fde80bae8bc3a17114ad8dc6482ff35bf12f5832ad56a8c4 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pVV 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pVV 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.pVV 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 70550 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70550 ']' 00:13:44.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.383 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 70582 /var/tmp/host.sock 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70582 ']' 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
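The gen_dhchap_key calls traced above all follow one pattern: read len/2 random bytes with xxd to get a hex key of the requested length, create a 0600 temp file named after the digest, and wrap the hex key into a DHHC-1 secret (the inline "python -" step) that ends up in that file and is later registered with keyring_file_add_key. A minimal sketch of the null/48 case, built only from commands visible in the trace (the DHHC-1 formatting itself is elided because the log does not print it):

  key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex characters
  file=$(mktemp -t spdk.key-null.XXX)     # e.g. /tmp/spdk.key-null.nGX above
  # format_dhchap_key "$key" 0 wraps the hex key into the DHHC-1:... secret stored in "$file"
  chmod 0600 "$file"
  echo "$file"                            # path later passed to keyring_file_add_key key0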
00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.643 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nGX 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nGX 00:13:45.582 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nGX 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.hdj ]] 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdj 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdj 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdj 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cke 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cke 00:13:45.842 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cke 00:13:46.101 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.BR9 ]] 00:13:46.101 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BR9 00:13:46.101 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.101 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.101 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.101 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BR9 00:13:46.101 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BR9 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iKZ 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iKZ 00:13:46.360 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iKZ 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.n0S ]] 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0S 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0S 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0S 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pVV 00:13:46.619 08:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.pVV 00:13:46.619 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.pVV 00:13:46.878 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:46.878 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:46.878 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.878 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.878 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.878 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.137 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.396 00:13:47.396 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.396 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.397 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.656 { 00:13:47.656 "cntlid": 1, 00:13:47.656 "qid": 0, 00:13:47.656 "state": "enabled", 00:13:47.656 "thread": "nvmf_tgt_poll_group_000", 00:13:47.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:13:47.656 "listen_address": { 00:13:47.656 "trtype": "TCP", 00:13:47.656 "adrfam": "IPv4", 00:13:47.656 "traddr": "10.0.0.3", 00:13:47.656 "trsvcid": "4420" 00:13:47.656 }, 00:13:47.656 "peer_address": { 00:13:47.656 "trtype": "TCP", 00:13:47.656 "adrfam": "IPv4", 00:13:47.656 "traddr": "10.0.0.1", 00:13:47.656 "trsvcid": "56884" 00:13:47.656 }, 00:13:47.656 "auth": { 00:13:47.656 "state": "completed", 00:13:47.656 "digest": "sha256", 00:13:47.656 "dhgroup": "null" 00:13:47.656 } 00:13:47.656 } 00:13:47.656 ]' 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.656 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.915 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:13:47.915 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:51.207 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.466 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.466 08:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.725 00:13:51.725 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.725 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.725 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.985 { 00:13:51.985 "cntlid": 3, 00:13:51.985 "qid": 0, 00:13:51.985 "state": "enabled", 00:13:51.985 "thread": "nvmf_tgt_poll_group_000", 00:13:51.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:13:51.985 "listen_address": { 00:13:51.985 "trtype": "TCP", 00:13:51.985 "adrfam": "IPv4", 00:13:51.985 "traddr": "10.0.0.3", 00:13:51.985 "trsvcid": "4420" 00:13:51.985 }, 00:13:51.985 "peer_address": { 00:13:51.985 "trtype": "TCP", 00:13:51.985 "adrfam": "IPv4", 00:13:51.985 "traddr": "10.0.0.1", 00:13:51.985 "trsvcid": "56914" 00:13:51.985 }, 00:13:51.985 "auth": { 00:13:51.985 "state": "completed", 00:13:51.985 "digest": "sha256", 00:13:51.985 "dhgroup": "null" 00:13:51.985 } 00:13:51.985 } 00:13:51.985 ]' 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.985 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret 
DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:13:52.245 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:13:52.813 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.813 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:52.813 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.813 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.074 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.074 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.074 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:53.074 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.074 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.334 00:13:53.334 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.334 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.334 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.593 { 00:13:53.593 "cntlid": 5, 00:13:53.593 "qid": 0, 00:13:53.593 "state": "enabled", 00:13:53.593 "thread": "nvmf_tgt_poll_group_000", 00:13:53.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:13:53.593 "listen_address": { 00:13:53.593 "trtype": "TCP", 00:13:53.593 "adrfam": "IPv4", 00:13:53.593 "traddr": "10.0.0.3", 00:13:53.593 "trsvcid": "4420" 00:13:53.593 }, 00:13:53.593 "peer_address": { 00:13:53.593 "trtype": "TCP", 00:13:53.593 "adrfam": "IPv4", 00:13:53.593 "traddr": "10.0.0.1", 00:13:53.593 "trsvcid": "38392" 00:13:53.593 }, 00:13:53.593 "auth": { 00:13:53.593 "state": "completed", 00:13:53.593 "digest": "sha256", 00:13:53.593 "dhgroup": "null" 00:13:53.593 } 00:13:53.593 } 00:13:53.593 ]' 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.593 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.852 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:53.852 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.852 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.852 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.852 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.112 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:13:54.112 08:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.683 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.943 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.203 00:13:55.203 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.203 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.203 08:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.203 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.203 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.203 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.203 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.463 { 00:13:55.463 "cntlid": 7, 00:13:55.463 "qid": 0, 00:13:55.463 "state": "enabled", 00:13:55.463 "thread": "nvmf_tgt_poll_group_000", 00:13:55.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:13:55.463 "listen_address": { 00:13:55.463 "trtype": "TCP", 00:13:55.463 "adrfam": "IPv4", 00:13:55.463 "traddr": "10.0.0.3", 00:13:55.463 "trsvcid": "4420" 00:13:55.463 }, 00:13:55.463 "peer_address": { 00:13:55.463 "trtype": "TCP", 00:13:55.463 "adrfam": "IPv4", 00:13:55.463 "traddr": "10.0.0.1", 00:13:55.463 "trsvcid": "38420" 00:13:55.463 }, 00:13:55.463 "auth": { 00:13:55.463 "state": "completed", 00:13:55.463 "digest": "sha256", 00:13:55.463 "dhgroup": "null" 00:13:55.463 } 00:13:55.463 } 00:13:55.463 ]' 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.463 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.723 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:13:55.723 08:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.292 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.551 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.552 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.552 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.552 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.552 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.811 00:13:56.811 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.811 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.811 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.071 { 00:13:57.071 "cntlid": 9, 00:13:57.071 "qid": 0, 00:13:57.071 "state": "enabled", 00:13:57.071 "thread": "nvmf_tgt_poll_group_000", 00:13:57.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:13:57.071 "listen_address": { 00:13:57.071 "trtype": "TCP", 00:13:57.071 "adrfam": "IPv4", 00:13:57.071 "traddr": "10.0.0.3", 00:13:57.071 "trsvcid": "4420" 00:13:57.071 }, 00:13:57.071 "peer_address": { 00:13:57.071 "trtype": "TCP", 00:13:57.071 "adrfam": "IPv4", 00:13:57.071 "traddr": "10.0.0.1", 00:13:57.071 "trsvcid": "38442" 00:13:57.071 }, 00:13:57.071 "auth": { 00:13:57.071 "state": "completed", 00:13:57.071 "digest": "sha256", 00:13:57.071 "dhgroup": "ffdhe2048" 00:13:57.071 } 00:13:57.071 } 00:13:57.071 ]' 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:57.071 08:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.071 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.071 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.071 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.331 
08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:13:57.331 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:13:57.899 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:57.900 08:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.159 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.419 00:13:58.419 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.419 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.419 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.678 { 00:13:58.678 "cntlid": 11, 00:13:58.678 "qid": 0, 00:13:58.678 "state": "enabled", 00:13:58.678 "thread": "nvmf_tgt_poll_group_000", 00:13:58.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:13:58.678 "listen_address": { 00:13:58.678 "trtype": "TCP", 00:13:58.678 "adrfam": "IPv4", 00:13:58.678 "traddr": "10.0.0.3", 00:13:58.678 "trsvcid": "4420" 00:13:58.678 }, 00:13:58.678 "peer_address": { 00:13:58.678 "trtype": "TCP", 00:13:58.678 "adrfam": "IPv4", 00:13:58.678 "traddr": "10.0.0.1", 00:13:58.678 "trsvcid": "38466" 00:13:58.678 }, 00:13:58.678 "auth": { 00:13:58.678 "state": "completed", 00:13:58.678 "digest": "sha256", 00:13:58.678 "dhgroup": "ffdhe2048" 00:13:58.678 } 00:13:58.678 } 00:13:58.678 ]' 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.678 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.679 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:58.679 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.679 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.679 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.679 
08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.938 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:13:58.938 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:59.507 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.766 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.767 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.026 00:14:00.026 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.026 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.026 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.286 { 00:14:00.286 "cntlid": 13, 00:14:00.286 "qid": 0, 00:14:00.286 "state": "enabled", 00:14:00.286 "thread": "nvmf_tgt_poll_group_000", 00:14:00.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:00.286 "listen_address": { 00:14:00.286 "trtype": "TCP", 00:14:00.286 "adrfam": "IPv4", 00:14:00.286 "traddr": "10.0.0.3", 00:14:00.286 "trsvcid": "4420" 00:14:00.286 }, 00:14:00.286 "peer_address": { 00:14:00.286 "trtype": "TCP", 00:14:00.286 "adrfam": "IPv4", 00:14:00.286 "traddr": "10.0.0.1", 00:14:00.286 "trsvcid": "38498" 00:14:00.286 }, 00:14:00.286 "auth": { 00:14:00.286 "state": "completed", 00:14:00.286 "digest": "sha256", 00:14:00.286 "dhgroup": "ffdhe2048" 00:14:00.286 } 00:14:00.286 } 00:14:00.286 ]' 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.286 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.546 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:00.546 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.546 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.546 08:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.546 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.546 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:00.546 08:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:01.115 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
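Each digest/dhgroup/key iteration in this part of the log repeats the same verification after bdev_nvme_attach_controller succeeds: nvmf_subsystem_get_qpairs is read back and the negotiated auth parameters are asserted with jq. A minimal sketch of that check, using the same RPC and jq filters seen in the surrounding entries (subsystem NQN and expected values taken from this run; sha256/ffdhe2048 is the pair being exercised here):

    # read back the qpairs for the subsystem and assert the negotiated auth parameters
    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]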
00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.374 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.633 00:14:01.633 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.633 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.633 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.893 { 00:14:01.893 "cntlid": 15, 00:14:01.893 "qid": 0, 00:14:01.893 "state": "enabled", 00:14:01.893 "thread": "nvmf_tgt_poll_group_000", 00:14:01.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:01.893 "listen_address": { 00:14:01.893 "trtype": "TCP", 00:14:01.893 "adrfam": "IPv4", 00:14:01.893 "traddr": "10.0.0.3", 00:14:01.893 "trsvcid": "4420" 00:14:01.893 }, 00:14:01.893 "peer_address": { 00:14:01.893 "trtype": "TCP", 00:14:01.893 "adrfam": "IPv4", 00:14:01.893 "traddr": "10.0.0.1", 00:14:01.893 "trsvcid": "38538" 00:14:01.893 }, 00:14:01.893 "auth": { 00:14:01.893 "state": "completed", 00:14:01.893 "digest": "sha256", 00:14:01.893 "dhgroup": "ffdhe2048" 00:14:01.893 } 00:14:01.893 } 00:14:01.893 ]' 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.893 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.152 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:02.152 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.152 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.152 
08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.152 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.411 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:02.411 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.980 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.981 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.981 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.239 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.499 { 00:14:03.499 "cntlid": 17, 00:14:03.499 "qid": 0, 00:14:03.499 "state": "enabled", 00:14:03.499 "thread": "nvmf_tgt_poll_group_000", 00:14:03.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:03.499 "listen_address": { 00:14:03.499 "trtype": "TCP", 00:14:03.499 "adrfam": "IPv4", 00:14:03.499 "traddr": "10.0.0.3", 00:14:03.499 "trsvcid": "4420" 00:14:03.499 }, 00:14:03.499 "peer_address": { 00:14:03.499 "trtype": "TCP", 00:14:03.499 "adrfam": "IPv4", 00:14:03.499 "traddr": "10.0.0.1", 00:14:03.499 "trsvcid": "39470" 00:14:03.499 }, 00:14:03.499 "auth": { 00:14:03.499 "state": "completed", 00:14:03.499 "digest": "sha256", 00:14:03.499 "dhgroup": "ffdhe3072" 00:14:03.499 } 00:14:03.499 } 00:14:03.499 ]' 00:14:03.499 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.758 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.758 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.758 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:03.758 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.758 08:14:08 
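One iteration of the dhgroup/key loop above, flattened into the RPC sequence it produces. A sketch based only on the commands visible in this log: key0/ckey0 name DH-HMAC-CHAP keys registered earlier in the run (not shown here), hostrpc matches the rpc.py -s /var/tmp/host.sock call at target/auth.sh@31, and rpc_cmd is approximated by a plain rpc.py call against the target's default socket.

  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumption: target on /var/tmp/spdk.sock

  # Restrict the host to the digest/dhgroup under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # Allow the host NQN on the subsystem with the key pair for this index.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach an authenticated controller from the host.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # ... qpair checks as sketched earlier, then tear down before the next key/dhgroup pair
  # (the kernel-initiator connect the log shows in between is sketched further below).
  hostrpc bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8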
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.758 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.758 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.018 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:04.018 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.606 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.174 00:14:05.174 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.174 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.174 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.174 { 00:14:05.174 "cntlid": 19, 00:14:05.174 "qid": 0, 00:14:05.174 "state": "enabled", 00:14:05.174 "thread": "nvmf_tgt_poll_group_000", 00:14:05.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:05.174 "listen_address": { 00:14:05.174 "trtype": "TCP", 00:14:05.174 "adrfam": "IPv4", 00:14:05.174 "traddr": "10.0.0.3", 00:14:05.174 "trsvcid": "4420" 00:14:05.174 }, 00:14:05.174 "peer_address": { 00:14:05.174 "trtype": "TCP", 00:14:05.174 "adrfam": "IPv4", 00:14:05.174 "traddr": "10.0.0.1", 00:14:05.174 "trsvcid": "39500" 00:14:05.174 }, 00:14:05.174 "auth": { 00:14:05.174 "state": "completed", 00:14:05.174 "digest": "sha256", 00:14:05.174 "dhgroup": "ffdhe3072" 00:14:05.174 } 00:14:05.174 } 00:14:05.174 ]' 00:14:05.174 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.433 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.693 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:05.693 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.267 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.531 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.531 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.531 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.531 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.790 00:14:06.790 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.790 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.790 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.790 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.049 { 00:14:07.049 "cntlid": 21, 00:14:07.049 "qid": 0, 00:14:07.049 "state": "enabled", 00:14:07.049 "thread": "nvmf_tgt_poll_group_000", 00:14:07.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:07.049 "listen_address": { 00:14:07.049 "trtype": "TCP", 00:14:07.049 "adrfam": "IPv4", 00:14:07.049 "traddr": "10.0.0.3", 00:14:07.049 "trsvcid": "4420" 00:14:07.049 }, 00:14:07.049 "peer_address": { 00:14:07.049 "trtype": "TCP", 00:14:07.049 "adrfam": "IPv4", 00:14:07.049 "traddr": "10.0.0.1", 00:14:07.049 "trsvcid": "39532" 00:14:07.049 }, 00:14:07.049 "auth": { 00:14:07.049 "state": "completed", 00:14:07.049 "digest": "sha256", 00:14:07.049 "dhgroup": "ffdhe3072" 00:14:07.049 } 00:14:07.049 } 00:14:07.049 ]' 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.049 08:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.049 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.050 08:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:07.309 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:07.877 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.137 08:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.396 00:14:08.396 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.396 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.396 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.655 { 00:14:08.655 "cntlid": 23, 00:14:08.655 "qid": 0, 00:14:08.655 "state": "enabled", 00:14:08.655 "thread": "nvmf_tgt_poll_group_000", 00:14:08.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:08.655 "listen_address": { 00:14:08.655 "trtype": "TCP", 00:14:08.655 "adrfam": "IPv4", 00:14:08.655 "traddr": "10.0.0.3", 00:14:08.655 "trsvcid": "4420" 00:14:08.655 }, 00:14:08.655 "peer_address": { 00:14:08.655 "trtype": "TCP", 00:14:08.655 "adrfam": "IPv4", 00:14:08.655 "traddr": "10.0.0.1", 00:14:08.655 "trsvcid": "39568" 00:14:08.655 }, 00:14:08.655 "auth": { 00:14:08.655 "state": "completed", 00:14:08.655 "digest": "sha256", 00:14:08.655 "dhgroup": "ffdhe3072" 00:14:08.655 } 00:14:08.655 } 00:14:08.655 ]' 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.655 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.656 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.915 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:08.915 08:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:09.486 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.779 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.074 00:14:10.074 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.074 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.074 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.334 { 00:14:10.334 "cntlid": 25, 00:14:10.334 "qid": 0, 00:14:10.334 "state": "enabled", 00:14:10.334 "thread": "nvmf_tgt_poll_group_000", 00:14:10.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:10.334 "listen_address": { 00:14:10.334 "trtype": "TCP", 00:14:10.334 "adrfam": "IPv4", 00:14:10.334 "traddr": "10.0.0.3", 00:14:10.334 "trsvcid": "4420" 00:14:10.334 }, 00:14:10.334 "peer_address": { 00:14:10.334 "trtype": "TCP", 00:14:10.334 "adrfam": "IPv4", 00:14:10.334 "traddr": "10.0.0.1", 00:14:10.334 "trsvcid": "39590" 00:14:10.334 }, 00:14:10.334 "auth": { 00:14:10.334 "state": "completed", 00:14:10.334 "digest": "sha256", 00:14:10.334 "dhgroup": "ffdhe4096" 00:14:10.334 } 00:14:10.334 } 00:14:10.334 ]' 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.334 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.593 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:10.593 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:11.161 08:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:11.161 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:11.420 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:11.420 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.420 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:11.420 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:11.420 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.421 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.680 00:14:11.680 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.680 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.680 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.939 { 00:14:11.939 "cntlid": 27, 00:14:11.939 "qid": 0, 00:14:11.939 "state": "enabled", 00:14:11.939 "thread": "nvmf_tgt_poll_group_000", 00:14:11.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:11.939 "listen_address": { 00:14:11.939 "trtype": "TCP", 00:14:11.939 "adrfam": "IPv4", 00:14:11.939 "traddr": "10.0.0.3", 00:14:11.939 "trsvcid": "4420" 00:14:11.939 }, 00:14:11.939 "peer_address": { 00:14:11.939 "trtype": "TCP", 00:14:11.939 "adrfam": "IPv4", 00:14:11.939 "traddr": "10.0.0.1", 00:14:11.939 "trsvcid": "39606" 00:14:11.939 }, 00:14:11.939 "auth": { 00:14:11.939 "state": "completed", 
00:14:11.939 "digest": "sha256", 00:14:11.939 "dhgroup": "ffdhe4096" 00:14:11.939 } 00:14:11.939 } 00:14:11.939 ]' 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.939 08:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.199 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:12.199 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:12.767 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.026 08:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.026 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.286 00:14:13.286 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.286 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.286 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.546 { 00:14:13.546 "cntlid": 29, 00:14:13.546 "qid": 0, 00:14:13.546 "state": "enabled", 00:14:13.546 "thread": "nvmf_tgt_poll_group_000", 00:14:13.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:13.546 "listen_address": { 00:14:13.546 "trtype": "TCP", 00:14:13.546 "adrfam": "IPv4", 00:14:13.546 "traddr": "10.0.0.3", 00:14:13.546 "trsvcid": "4420" 00:14:13.546 }, 00:14:13.546 "peer_address": { 00:14:13.546 "trtype": "TCP", 00:14:13.546 "adrfam": 
"IPv4", 00:14:13.546 "traddr": "10.0.0.1", 00:14:13.546 "trsvcid": "48336" 00:14:13.546 }, 00:14:13.546 "auth": { 00:14:13.546 "state": "completed", 00:14:13.546 "digest": "sha256", 00:14:13.546 "dhgroup": "ffdhe4096" 00:14:13.546 } 00:14:13.546 } 00:14:13.546 ]' 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.546 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.805 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:13.805 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:14.383 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:14.642 08:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.642 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:14.902 00:14:14.902 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.902 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.902 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.161 { 00:14:15.161 "cntlid": 31, 00:14:15.161 "qid": 0, 00:14:15.161 "state": "enabled", 00:14:15.161 "thread": "nvmf_tgt_poll_group_000", 00:14:15.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:15.161 "listen_address": { 00:14:15.161 "trtype": "TCP", 00:14:15.161 "adrfam": "IPv4", 00:14:15.161 "traddr": "10.0.0.3", 00:14:15.161 "trsvcid": "4420" 00:14:15.161 }, 00:14:15.161 "peer_address": { 00:14:15.161 "trtype": "TCP", 
00:14:15.161 "adrfam": "IPv4", 00:14:15.161 "traddr": "10.0.0.1", 00:14:15.161 "trsvcid": "48372" 00:14:15.161 }, 00:14:15.161 "auth": { 00:14:15.161 "state": "completed", 00:14:15.161 "digest": "sha256", 00:14:15.161 "dhgroup": "ffdhe4096" 00:14:15.161 } 00:14:15.161 } 00:14:15.161 ]' 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.161 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.421 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:15.421 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.421 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.421 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.421 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.680 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:15.680 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:16.248 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.248 08:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:16.248 
08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.248 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.249 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.817 00:14:16.817 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.817 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.817 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.077 { 00:14:17.077 "cntlid": 33, 00:14:17.077 "qid": 0, 00:14:17.077 "state": "enabled", 00:14:17.077 "thread": "nvmf_tgt_poll_group_000", 00:14:17.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:17.077 "listen_address": { 00:14:17.077 "trtype": "TCP", 00:14:17.077 "adrfam": "IPv4", 00:14:17.077 "traddr": 
"10.0.0.3", 00:14:17.077 "trsvcid": "4420" 00:14:17.077 }, 00:14:17.077 "peer_address": { 00:14:17.077 "trtype": "TCP", 00:14:17.077 "adrfam": "IPv4", 00:14:17.077 "traddr": "10.0.0.1", 00:14:17.077 "trsvcid": "48402" 00:14:17.077 }, 00:14:17.077 "auth": { 00:14:17.077 "state": "completed", 00:14:17.077 "digest": "sha256", 00:14:17.077 "dhgroup": "ffdhe6144" 00:14:17.077 } 00:14:17.077 } 00:14:17.077 ]' 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.077 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.336 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:17.336 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:17.903 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:18.162 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:18.162 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.163 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.422 00:14:18.422 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.422 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.422 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.681 { 00:14:18.681 "cntlid": 35, 00:14:18.681 "qid": 0, 00:14:18.681 "state": "enabled", 00:14:18.681 "thread": "nvmf_tgt_poll_group_000", 
00:14:18.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:18.681 "listen_address": { 00:14:18.681 "trtype": "TCP", 00:14:18.681 "adrfam": "IPv4", 00:14:18.681 "traddr": "10.0.0.3", 00:14:18.681 "trsvcid": "4420" 00:14:18.681 }, 00:14:18.681 "peer_address": { 00:14:18.681 "trtype": "TCP", 00:14:18.681 "adrfam": "IPv4", 00:14:18.681 "traddr": "10.0.0.1", 00:14:18.681 "trsvcid": "48428" 00:14:18.681 }, 00:14:18.681 "auth": { 00:14:18.681 "state": "completed", 00:14:18.681 "digest": "sha256", 00:14:18.681 "dhgroup": "ffdhe6144" 00:14:18.681 } 00:14:18.681 } 00:14:18.681 ]' 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:18.681 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.940 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.940 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.940 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.940 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:18.940 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.507 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:19.507 08:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.766 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.335 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.335 { 
00:14:20.335 "cntlid": 37, 00:14:20.335 "qid": 0, 00:14:20.335 "state": "enabled", 00:14:20.335 "thread": "nvmf_tgt_poll_group_000", 00:14:20.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:20.335 "listen_address": { 00:14:20.335 "trtype": "TCP", 00:14:20.335 "adrfam": "IPv4", 00:14:20.335 "traddr": "10.0.0.3", 00:14:20.335 "trsvcid": "4420" 00:14:20.335 }, 00:14:20.335 "peer_address": { 00:14:20.335 "trtype": "TCP", 00:14:20.335 "adrfam": "IPv4", 00:14:20.335 "traddr": "10.0.0.1", 00:14:20.335 "trsvcid": "48444" 00:14:20.335 }, 00:14:20.335 "auth": { 00:14:20.335 "state": "completed", 00:14:20.335 "digest": "sha256", 00:14:20.335 "dhgroup": "ffdhe6144" 00:14:20.335 } 00:14:20.335 } 00:14:20.335 ]' 00:14:20.335 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.594 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.852 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:20.852 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:21.421 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.680 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.940 00:14:21.940 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.940 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.940 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:14:22.199 { 00:14:22.199 "cntlid": 39, 00:14:22.199 "qid": 0, 00:14:22.199 "state": "enabled", 00:14:22.199 "thread": "nvmf_tgt_poll_group_000", 00:14:22.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:22.199 "listen_address": { 00:14:22.199 "trtype": "TCP", 00:14:22.199 "adrfam": "IPv4", 00:14:22.199 "traddr": "10.0.0.3", 00:14:22.199 "trsvcid": "4420" 00:14:22.199 }, 00:14:22.199 "peer_address": { 00:14:22.199 "trtype": "TCP", 00:14:22.199 "adrfam": "IPv4", 00:14:22.199 "traddr": "10.0.0.1", 00:14:22.199 "trsvcid": "48472" 00:14:22.199 }, 00:14:22.199 "auth": { 00:14:22.199 "state": "completed", 00:14:22.199 "digest": "sha256", 00:14:22.199 "dhgroup": "ffdhe6144" 00:14:22.199 } 00:14:22.199 } 00:14:22.199 ]' 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.199 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.458 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:22.458 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:23.026 08:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.285 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.286 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.286 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.286 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.855 00:14:23.855 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.855 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.855 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.114 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.114 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.114 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.114 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.114 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:24.114 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.114 { 00:14:24.114 "cntlid": 41, 00:14:24.114 "qid": 0, 00:14:24.114 "state": "enabled", 00:14:24.115 "thread": "nvmf_tgt_poll_group_000", 00:14:24.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:24.115 "listen_address": { 00:14:24.115 "trtype": "TCP", 00:14:24.115 "adrfam": "IPv4", 00:14:24.115 "traddr": "10.0.0.3", 00:14:24.115 "trsvcid": "4420" 00:14:24.115 }, 00:14:24.115 "peer_address": { 00:14:24.115 "trtype": "TCP", 00:14:24.115 "adrfam": "IPv4", 00:14:24.115 "traddr": "10.0.0.1", 00:14:24.115 "trsvcid": "37108" 00:14:24.115 }, 00:14:24.115 "auth": { 00:14:24.115 "state": "completed", 00:14:24.115 "digest": "sha256", 00:14:24.115 "dhgroup": "ffdhe8192" 00:14:24.115 } 00:14:24.115 } 00:14:24.115 ]' 00:14:24.115 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.115 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.115 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.115 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.115 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.115 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.115 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.115 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.375 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:24.375 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
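For orientation, the records above and below repeat one "connect_authenticate" pass per key id. The following is a rough bash sketch of that pass, reconstructed only from the commands visible in this xtrace (auth.sh@65-78); it is not the verbatim test script. Here rpc_cmd and hostrpc are the harness wrappers shown in the log (rpc.py against the target, and rpc.py -s /var/tmp/host.sock against the host-side app), and hostnqn stands in for the UUID-based host NQN printed in each record.

    # sketch of one connect_authenticate iteration, assuming the harness
    # helpers rpc_cmd/hostrpc and the keys/ckeys arrays from the log exist
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 qpairs
        # allow the host to authenticate with key$keyid (plus ctrlr key when one is defined)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # attach from the host-side bdev layer and confirm the controller appears
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # verify the target negotiated the expected digest/dhgroup and completed auth
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }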
00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:24.944 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.203 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.772 00:14:25.772 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.772 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.772 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.031 08:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.031 { 00:14:26.031 "cntlid": 43, 00:14:26.031 "qid": 0, 00:14:26.031 "state": "enabled", 00:14:26.031 "thread": "nvmf_tgt_poll_group_000", 00:14:26.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:26.031 "listen_address": { 00:14:26.031 "trtype": "TCP", 00:14:26.031 "adrfam": "IPv4", 00:14:26.031 "traddr": "10.0.0.3", 00:14:26.031 "trsvcid": "4420" 00:14:26.031 }, 00:14:26.031 "peer_address": { 00:14:26.031 "trtype": "TCP", 00:14:26.031 "adrfam": "IPv4", 00:14:26.031 "traddr": "10.0.0.1", 00:14:26.031 "trsvcid": "37136" 00:14:26.031 }, 00:14:26.031 "auth": { 00:14:26.031 "state": "completed", 00:14:26.031 "digest": "sha256", 00:14:26.031 "dhgroup": "ffdhe8192" 00:14:26.031 } 00:14:26.031 } 00:14:26.031 ]' 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.031 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.290 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:26.290 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
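After each bdev-level check, the same key is replayed through the kernel initiator before the host entry is cleaned up, as the nvme connect/disconnect and nvmf_subsystem_remove_host records above show. A minimal sketch of that leg, with keys/ckeys standing in for the DHHC-1 secrets whose literal values appear in the log:

    # sketch of the nvme-cli leg that follows each iteration (flags as in the log)
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "${keys[$keyid]}" \
        ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # drop the host from the subsystem before the next key id is tried
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"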
00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:26.858 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:27.117 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:27.117 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.117 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.118 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.686 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.686 08:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.686 { 00:14:27.686 "cntlid": 45, 00:14:27.686 "qid": 0, 00:14:27.686 "state": "enabled", 00:14:27.686 "thread": "nvmf_tgt_poll_group_000", 00:14:27.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:27.686 "listen_address": { 00:14:27.686 "trtype": "TCP", 00:14:27.686 "adrfam": "IPv4", 00:14:27.686 "traddr": "10.0.0.3", 00:14:27.686 "trsvcid": "4420" 00:14:27.686 }, 00:14:27.686 "peer_address": { 00:14:27.686 "trtype": "TCP", 00:14:27.686 "adrfam": "IPv4", 00:14:27.686 "traddr": "10.0.0.1", 00:14:27.686 "trsvcid": "37156" 00:14:27.686 }, 00:14:27.686 "auth": { 00:14:27.686 "state": "completed", 00:14:27.686 "digest": "sha256", 00:14:27.686 "dhgroup": "ffdhe8192" 00:14:27.686 } 00:14:27.686 } 00:14:27.686 ]' 00:14:27.686 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.945 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.204 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:28.204 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.837 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.406 00:14:29.406 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.406 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.406 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.665 
08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.665 { 00:14:29.665 "cntlid": 47, 00:14:29.665 "qid": 0, 00:14:29.665 "state": "enabled", 00:14:29.665 "thread": "nvmf_tgt_poll_group_000", 00:14:29.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:29.665 "listen_address": { 00:14:29.665 "trtype": "TCP", 00:14:29.665 "adrfam": "IPv4", 00:14:29.665 "traddr": "10.0.0.3", 00:14:29.665 "trsvcid": "4420" 00:14:29.665 }, 00:14:29.665 "peer_address": { 00:14:29.665 "trtype": "TCP", 00:14:29.665 "adrfam": "IPv4", 00:14:29.665 "traddr": "10.0.0.1", 00:14:29.665 "trsvcid": "37178" 00:14:29.665 }, 00:14:29.665 "auth": { 00:14:29.665 "state": "completed", 00:14:29.665 "digest": "sha256", 00:14:29.665 "dhgroup": "ffdhe8192" 00:14:29.665 } 00:14:29.665 } 00:14:29.665 ]' 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.665 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.924 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:29.924 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
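The sha256 pass ends here and the sha384 pass begins below. The surrounding loop structure is visible in the xtrace markers (auth.sh@118-121 and the connect_authenticate calls at @123); a sketch of it, assuming the digests/dhgroups/keys arrays set up earlier in the script:

    # sketch of the outer loop driving these records: every digest/dhgroup pair
    # is configured on the host bdev layer, then each key id is authenticated
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done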
00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:30.493 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.752 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.012 00:14:31.012 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.012 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.012 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.271 { 00:14:31.271 "cntlid": 49, 00:14:31.271 "qid": 0, 00:14:31.271 "state": "enabled", 00:14:31.271 "thread": "nvmf_tgt_poll_group_000", 00:14:31.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:31.271 "listen_address": { 00:14:31.271 "trtype": "TCP", 00:14:31.271 "adrfam": "IPv4", 00:14:31.271 "traddr": "10.0.0.3", 00:14:31.271 "trsvcid": "4420" 00:14:31.271 }, 00:14:31.271 "peer_address": { 00:14:31.271 "trtype": "TCP", 00:14:31.271 "adrfam": "IPv4", 00:14:31.271 "traddr": "10.0.0.1", 00:14:31.271 "trsvcid": "37210" 00:14:31.271 }, 00:14:31.271 "auth": { 00:14:31.271 "state": "completed", 00:14:31.271 "digest": "sha384", 00:14:31.271 "dhgroup": "null" 00:14:31.271 } 00:14:31.271 } 00:14:31.271 ]' 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.271 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:31.272 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.272 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.272 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.272 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.531 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:31.531 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.100 08:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:32.100 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.359 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.629 00:14:32.629 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.629 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.629 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.899 { 00:14:32.899 "cntlid": 51, 00:14:32.899 "qid": 0, 00:14:32.899 "state": "enabled", 00:14:32.899 "thread": "nvmf_tgt_poll_group_000", 00:14:32.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:32.899 "listen_address": { 00:14:32.899 "trtype": "TCP", 00:14:32.899 "adrfam": "IPv4", 00:14:32.899 "traddr": "10.0.0.3", 00:14:32.899 "trsvcid": "4420" 00:14:32.899 }, 00:14:32.899 "peer_address": { 00:14:32.899 "trtype": "TCP", 00:14:32.899 "adrfam": "IPv4", 00:14:32.899 "traddr": "10.0.0.1", 00:14:32.899 "trsvcid": "58220" 00:14:32.899 }, 00:14:32.899 "auth": { 00:14:32.899 "state": "completed", 00:14:32.899 "digest": "sha384", 00:14:32.899 "dhgroup": "null" 00:14:32.899 } 00:14:32.899 } 00:14:32.899 ]' 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:32.899 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.158 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.158 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.158 08:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.158 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:33.158 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.726 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:33.726 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.985 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.244 00:14:34.244 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.244 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:34.244 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.503 { 00:14:34.503 "cntlid": 53, 00:14:34.503 "qid": 0, 00:14:34.503 "state": "enabled", 00:14:34.503 "thread": "nvmf_tgt_poll_group_000", 00:14:34.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:34.503 "listen_address": { 00:14:34.503 "trtype": "TCP", 00:14:34.503 "adrfam": "IPv4", 00:14:34.503 "traddr": "10.0.0.3", 00:14:34.503 "trsvcid": "4420" 00:14:34.503 }, 00:14:34.503 "peer_address": { 00:14:34.503 "trtype": "TCP", 00:14:34.503 "adrfam": "IPv4", 00:14:34.503 "traddr": "10.0.0.1", 00:14:34.503 "trsvcid": "58246" 00:14:34.503 }, 00:14:34.503 "auth": { 00:14:34.503 "state": "completed", 00:14:34.503 "digest": "sha384", 00:14:34.503 "dhgroup": "null" 00:14:34.503 } 00:14:34.503 } 00:14:34.503 ]' 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.503 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.761 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.761 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.761 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.761 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:34.761 08:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:35.329 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.588 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.849 00:14:35.849 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.849 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:14:35.849 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.125 { 00:14:36.125 "cntlid": 55, 00:14:36.125 "qid": 0, 00:14:36.125 "state": "enabled", 00:14:36.125 "thread": "nvmf_tgt_poll_group_000", 00:14:36.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:36.125 "listen_address": { 00:14:36.125 "trtype": "TCP", 00:14:36.125 "adrfam": "IPv4", 00:14:36.125 "traddr": "10.0.0.3", 00:14:36.125 "trsvcid": "4420" 00:14:36.125 }, 00:14:36.125 "peer_address": { 00:14:36.125 "trtype": "TCP", 00:14:36.125 "adrfam": "IPv4", 00:14:36.125 "traddr": "10.0.0.1", 00:14:36.125 "trsvcid": "58258" 00:14:36.125 }, 00:14:36.125 "auth": { 00:14:36.125 "state": "completed", 00:14:36.125 "digest": "sha384", 00:14:36.125 "dhgroup": "null" 00:14:36.125 } 00:14:36.125 } 00:14:36.125 ]' 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.125 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:36.396 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
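Besides the SPDK host bdev path, each pass also exercises the kernel initiator via nvme-cli, passing the same secrets on the command line before disconnecting again. A sketch of that leg, with placeholder DHHC-1 strings standing in for the generated test keys (the trace passes only --dhchap-secret when the key has no controller counterpart):

```bash
# Kernel-initiator leg of the same pass: connect with DH-HMAC-CHAP secrets,
# then disconnect. The DHHC-1 values below are placeholders for the keys
# generated earlier in the test, not real secrets.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 \
    --dhchap-secret "DHHC-1:00:<host-key>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
```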
00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:36.965 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.225 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.484 00:14:37.484 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.484 
08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.484 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.743 { 00:14:37.743 "cntlid": 57, 00:14:37.743 "qid": 0, 00:14:37.743 "state": "enabled", 00:14:37.743 "thread": "nvmf_tgt_poll_group_000", 00:14:37.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:37.743 "listen_address": { 00:14:37.743 "trtype": "TCP", 00:14:37.743 "adrfam": "IPv4", 00:14:37.743 "traddr": "10.0.0.3", 00:14:37.743 "trsvcid": "4420" 00:14:37.743 }, 00:14:37.743 "peer_address": { 00:14:37.743 "trtype": "TCP", 00:14:37.743 "adrfam": "IPv4", 00:14:37.743 "traddr": "10.0.0.1", 00:14:37.743 "trsvcid": "58290" 00:14:37.743 }, 00:14:37.743 "auth": { 00:14:37.743 "state": "completed", 00:14:37.743 "digest": "sha384", 00:14:37.743 "dhgroup": "ffdhe2048" 00:14:37.743 } 00:14:37.743 } 00:14:37.743 ]' 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.743 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.002 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.002 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.002 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.002 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.002 08:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.261 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:38.261 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: 
--dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.830 08:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.089 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.348 { 00:14:39.348 "cntlid": 59, 00:14:39.348 "qid": 0, 00:14:39.348 "state": "enabled", 00:14:39.348 "thread": "nvmf_tgt_poll_group_000", 00:14:39.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:39.348 "listen_address": { 00:14:39.348 "trtype": "TCP", 00:14:39.348 "adrfam": "IPv4", 00:14:39.348 "traddr": "10.0.0.3", 00:14:39.348 "trsvcid": "4420" 00:14:39.348 }, 00:14:39.348 "peer_address": { 00:14:39.348 "trtype": "TCP", 00:14:39.348 "adrfam": "IPv4", 00:14:39.348 "traddr": "10.0.0.1", 00:14:39.348 "trsvcid": "58324" 00:14:39.348 }, 00:14:39.348 "auth": { 00:14:39.348 "state": "completed", 00:14:39.348 "digest": "sha384", 00:14:39.348 "dhgroup": "ffdhe2048" 00:14:39.348 } 00:14:39.348 } 00:14:39.348 ]' 00:14:39.348 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.607 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.866 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:39.866 08:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.435 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.436 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.695 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.695 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.695 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.695 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.695 00:14:40.954 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.955 { 00:14:40.955 "cntlid": 61, 00:14:40.955 "qid": 0, 00:14:40.955 "state": "enabled", 00:14:40.955 "thread": "nvmf_tgt_poll_group_000", 00:14:40.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:40.955 "listen_address": { 00:14:40.955 "trtype": "TCP", 00:14:40.955 "adrfam": "IPv4", 00:14:40.955 "traddr": "10.0.0.3", 00:14:40.955 "trsvcid": "4420" 00:14:40.955 }, 00:14:40.955 "peer_address": { 00:14:40.955 "trtype": "TCP", 00:14:40.955 "adrfam": "IPv4", 00:14:40.955 "traddr": "10.0.0.1", 00:14:40.955 "trsvcid": "58352" 00:14:40.955 }, 00:14:40.955 "auth": { 00:14:40.955 "state": "completed", 00:14:40.955 "digest": "sha384", 00:14:40.955 "dhgroup": "ffdhe2048" 00:14:40.955 } 00:14:40.955 } 00:14:40.955 ]' 00:14:40.955 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.214 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.214 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.214 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:41.214 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.214 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.214 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.214 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.473 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:41.473 08:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.042 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.301 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.560 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.560 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.820 { 00:14:42.820 "cntlid": 63, 00:14:42.820 "qid": 0, 00:14:42.820 "state": "enabled", 00:14:42.820 "thread": "nvmf_tgt_poll_group_000", 00:14:42.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:42.820 "listen_address": { 00:14:42.820 "trtype": "TCP", 00:14:42.820 "adrfam": "IPv4", 00:14:42.820 "traddr": "10.0.0.3", 00:14:42.820 "trsvcid": "4420" 00:14:42.820 }, 00:14:42.820 "peer_address": { 00:14:42.820 "trtype": "TCP", 00:14:42.820 "adrfam": "IPv4", 00:14:42.820 "traddr": "10.0.0.1", 00:14:42.820 "trsvcid": "58224" 00:14:42.820 }, 00:14:42.820 "auth": { 00:14:42.820 "state": "completed", 00:14:42.820 "digest": "sha384", 00:14:42.820 "dhgroup": "ffdhe2048" 00:14:42.820 } 00:14:42.820 } 00:14:42.820 ]' 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.820 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.079 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:43.079 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:43.648 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:43.908 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.167 00:14:44.167 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.167 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.167 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.425 { 00:14:44.425 "cntlid": 65, 00:14:44.425 "qid": 0, 00:14:44.425 "state": "enabled", 00:14:44.425 "thread": "nvmf_tgt_poll_group_000", 00:14:44.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:44.425 "listen_address": { 00:14:44.425 "trtype": "TCP", 00:14:44.425 "adrfam": "IPv4", 00:14:44.425 "traddr": "10.0.0.3", 00:14:44.425 "trsvcid": "4420" 00:14:44.425 }, 00:14:44.425 "peer_address": { 00:14:44.425 "trtype": "TCP", 00:14:44.425 "adrfam": "IPv4", 00:14:44.425 "traddr": "10.0.0.1", 00:14:44.425 "trsvcid": "58260" 00:14:44.425 }, 00:14:44.425 "auth": { 00:14:44.425 "state": "completed", 00:14:44.425 "digest": "sha384", 00:14:44.425 "dhgroup": "ffdhe3072" 00:14:44.425 } 00:14:44.425 } 00:14:44.425 ]' 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.425 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.684 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:44.684 08:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:45.252 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.511 08:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.511 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.771 00:14:45.771 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.771 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.771 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.030 { 00:14:46.030 "cntlid": 67, 00:14:46.030 "qid": 0, 00:14:46.030 "state": "enabled", 00:14:46.030 "thread": "nvmf_tgt_poll_group_000", 00:14:46.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:46.030 "listen_address": { 00:14:46.030 "trtype": "TCP", 00:14:46.030 "adrfam": "IPv4", 00:14:46.030 "traddr": "10.0.0.3", 00:14:46.030 "trsvcid": "4420" 00:14:46.030 }, 00:14:46.030 "peer_address": { 00:14:46.030 "trtype": "TCP", 00:14:46.030 "adrfam": "IPv4", 00:14:46.030 "traddr": "10.0.0.1", 00:14:46.030 "trsvcid": "58286" 00:14:46.030 }, 00:14:46.030 "auth": { 00:14:46.030 "state": "completed", 00:14:46.030 "digest": "sha384", 00:14:46.030 "dhgroup": "ffdhe3072" 00:14:46.030 } 00:14:46.030 } 00:14:46.030 ]' 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.030 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.289 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:46.289 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:46.857 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:46.858 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.117 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.376 00:14:47.376 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.376 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.376 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.636 { 00:14:47.636 "cntlid": 69, 00:14:47.636 "qid": 0, 00:14:47.636 "state": "enabled", 00:14:47.636 "thread": "nvmf_tgt_poll_group_000", 00:14:47.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:47.636 "listen_address": { 00:14:47.636 "trtype": "TCP", 00:14:47.636 "adrfam": "IPv4", 00:14:47.636 "traddr": "10.0.0.3", 00:14:47.636 "trsvcid": "4420" 00:14:47.636 }, 00:14:47.636 "peer_address": { 00:14:47.636 "trtype": "TCP", 00:14:47.636 "adrfam": "IPv4", 00:14:47.636 "traddr": "10.0.0.1", 00:14:47.636 "trsvcid": "58306" 00:14:47.636 }, 00:14:47.636 "auth": { 00:14:47.636 "state": "completed", 00:14:47.636 "digest": "sha384", 00:14:47.636 "dhgroup": "ffdhe3072" 00:14:47.636 } 00:14:47.636 } 00:14:47.636 ]' 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:47.636 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.895 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.895 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:47.895 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.895 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:47.895 08:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:48.834 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.093 00:14:49.093 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.093 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.093 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.353 { 00:14:49.353 "cntlid": 71, 00:14:49.353 "qid": 0, 00:14:49.353 "state": "enabled", 00:14:49.353 "thread": "nvmf_tgt_poll_group_000", 00:14:49.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:49.353 "listen_address": { 00:14:49.353 "trtype": "TCP", 00:14:49.353 "adrfam": "IPv4", 00:14:49.353 "traddr": "10.0.0.3", 00:14:49.353 "trsvcid": "4420" 00:14:49.353 }, 00:14:49.353 "peer_address": { 00:14:49.353 "trtype": "TCP", 00:14:49.353 "adrfam": "IPv4", 00:14:49.353 "traddr": "10.0.0.1", 00:14:49.353 "trsvcid": "58328" 00:14:49.353 }, 00:14:49.353 "auth": { 00:14:49.353 "state": "completed", 00:14:49.353 "digest": "sha384", 00:14:49.353 "dhgroup": "ffdhe3072" 00:14:49.353 } 00:14:49.353 } 00:14:49.353 ]' 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.353 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.612 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:49.612 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:50.182 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.441 08:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.441 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.442 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.700 00:14:50.700 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.700 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.700 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.960 { 00:14:50.960 "cntlid": 73, 00:14:50.960 "qid": 0, 00:14:50.960 "state": "enabled", 00:14:50.960 "thread": "nvmf_tgt_poll_group_000", 00:14:50.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:50.960 "listen_address": { 00:14:50.960 "trtype": "TCP", 00:14:50.960 "adrfam": "IPv4", 00:14:50.960 "traddr": "10.0.0.3", 00:14:50.960 "trsvcid": "4420" 00:14:50.960 }, 00:14:50.960 "peer_address": { 00:14:50.960 "trtype": "TCP", 00:14:50.960 "adrfam": "IPv4", 00:14:50.960 "traddr": "10.0.0.1", 00:14:50.960 "trsvcid": "58360" 00:14:50.960 }, 00:14:50.960 "auth": { 00:14:50.960 "state": "completed", 00:14:50.960 "digest": "sha384", 00:14:50.960 "dhgroup": "ffdhe4096" 00:14:50.960 } 00:14:50.960 } 00:14:50.960 ]' 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.960 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.219 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:51.219 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:51.788 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.048 08:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.048 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.308 00:14:52.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.308 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.567 { 00:14:52.567 "cntlid": 75, 00:14:52.567 "qid": 0, 00:14:52.567 "state": "enabled", 00:14:52.567 "thread": "nvmf_tgt_poll_group_000", 00:14:52.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:52.567 "listen_address": { 00:14:52.567 "trtype": "TCP", 00:14:52.567 "adrfam": "IPv4", 00:14:52.567 "traddr": "10.0.0.3", 00:14:52.567 "trsvcid": "4420" 00:14:52.567 }, 00:14:52.567 "peer_address": { 00:14:52.567 "trtype": "TCP", 00:14:52.567 "adrfam": "IPv4", 00:14:52.567 "traddr": "10.0.0.1", 00:14:52.567 "trsvcid": "56574" 00:14:52.567 }, 00:14:52.567 "auth": { 00:14:52.567 "state": "completed", 00:14:52.567 "digest": "sha384", 00:14:52.567 "dhgroup": "ffdhe4096" 00:14:52.567 } 00:14:52.567 } 00:14:52.567 ]' 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:52.567 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.826 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.826 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.826 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.826 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:52.826 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:53.394 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.654 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.913 00:14:53.913 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.913 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.913 08:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.173 { 00:14:54.173 "cntlid": 77, 00:14:54.173 "qid": 0, 00:14:54.173 "state": "enabled", 00:14:54.173 "thread": "nvmf_tgt_poll_group_000", 00:14:54.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:54.173 "listen_address": { 00:14:54.173 "trtype": "TCP", 00:14:54.173 "adrfam": "IPv4", 00:14:54.173 "traddr": "10.0.0.3", 00:14:54.173 "trsvcid": "4420" 00:14:54.173 }, 00:14:54.173 "peer_address": { 00:14:54.173 "trtype": "TCP", 00:14:54.173 "adrfam": "IPv4", 00:14:54.173 "traddr": "10.0.0.1", 00:14:54.173 "trsvcid": "56604" 00:14:54.173 }, 00:14:54.173 "auth": { 00:14:54.173 "state": "completed", 00:14:54.173 "digest": "sha384", 00:14:54.173 "dhgroup": "ffdhe4096" 00:14:54.173 } 00:14:54.173 } 00:14:54.173 ]' 00:14:54.173 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.432 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.432 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:54.432 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.432 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.433 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.433 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.433 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.692 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:54.692 08:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:55.261 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.521 08:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.521 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.780 00:14:55.780 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.780 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.780 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.039 { 00:14:56.039 "cntlid": 79, 00:14:56.039 "qid": 0, 00:14:56.039 "state": "enabled", 00:14:56.039 "thread": "nvmf_tgt_poll_group_000", 00:14:56.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:56.039 "listen_address": { 00:14:56.039 "trtype": "TCP", 00:14:56.039 "adrfam": "IPv4", 00:14:56.039 "traddr": "10.0.0.3", 00:14:56.039 "trsvcid": "4420" 00:14:56.039 }, 00:14:56.039 "peer_address": { 00:14:56.039 "trtype": "TCP", 00:14:56.039 "adrfam": "IPv4", 00:14:56.039 "traddr": "10.0.0.1", 00:14:56.039 "trsvcid": "56638" 00:14:56.039 }, 00:14:56.039 "auth": { 00:14:56.039 "state": "completed", 00:14:56.039 "digest": "sha384", 00:14:56.039 "dhgroup": "ffdhe4096" 00:14:56.039 } 00:14:56.039 } 00:14:56.039 ]' 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.039 08:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.039 08:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.299 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:56.299 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:56.872 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.131 08:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.390 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.650 { 00:14:57.650 "cntlid": 81, 00:14:57.650 "qid": 0, 00:14:57.650 "state": "enabled", 00:14:57.650 "thread": "nvmf_tgt_poll_group_000", 00:14:57.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:57.650 "listen_address": { 00:14:57.650 "trtype": "TCP", 00:14:57.650 "adrfam": "IPv4", 00:14:57.650 "traddr": "10.0.0.3", 00:14:57.650 "trsvcid": "4420" 00:14:57.650 }, 00:14:57.650 "peer_address": { 00:14:57.650 "trtype": "TCP", 00:14:57.650 "adrfam": "IPv4", 00:14:57.650 "traddr": "10.0.0.1", 00:14:57.650 "trsvcid": "56666" 00:14:57.650 }, 00:14:57.650 "auth": { 00:14:57.650 "state": "completed", 00:14:57.650 "digest": "sha384", 00:14:57.650 "dhgroup": "ffdhe6144" 00:14:57.650 } 00:14:57.650 } 00:14:57.650 ]' 00:14:57.650 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.909 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.169 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:58.169 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:58.737 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.997 08:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.256 00:14:59.256 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.256 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.256 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.515 { 00:14:59.515 "cntlid": 83, 00:14:59.515 "qid": 0, 00:14:59.515 "state": "enabled", 00:14:59.515 "thread": "nvmf_tgt_poll_group_000", 00:14:59.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:14:59.515 "listen_address": { 00:14:59.515 "trtype": "TCP", 00:14:59.515 "adrfam": "IPv4", 00:14:59.515 "traddr": "10.0.0.3", 00:14:59.515 "trsvcid": "4420" 00:14:59.515 }, 00:14:59.515 "peer_address": { 00:14:59.515 "trtype": "TCP", 00:14:59.515 "adrfam": "IPv4", 00:14:59.515 "traddr": "10.0.0.1", 00:14:59.515 "trsvcid": "56692" 00:14:59.515 }, 00:14:59.515 "auth": { 00:14:59.515 "state": "completed", 00:14:59.515 "digest": "sha384", 
00:14:59.515 "dhgroup": "ffdhe6144" 00:14:59.515 } 00:14:59.515 } 00:14:59.515 ]' 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:59.515 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.775 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.775 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.775 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.775 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:14:59.775 08:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:00.344 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:00.603 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.604 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.172 00:15:01.172 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.172 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.172 08:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.172 { 00:15:01.172 "cntlid": 85, 00:15:01.172 "qid": 0, 00:15:01.172 "state": "enabled", 00:15:01.172 "thread": "nvmf_tgt_poll_group_000", 00:15:01.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:01.172 "listen_address": { 00:15:01.172 "trtype": "TCP", 00:15:01.172 "adrfam": "IPv4", 00:15:01.172 "traddr": "10.0.0.3", 00:15:01.172 "trsvcid": "4420" 00:15:01.172 }, 00:15:01.172 "peer_address": { 00:15:01.172 "trtype": "TCP", 00:15:01.172 "adrfam": "IPv4", 00:15:01.172 "traddr": "10.0.0.1", 00:15:01.172 "trsvcid": "56720" 
00:15:01.172 }, 00:15:01.172 "auth": { 00:15:01.172 "state": "completed", 00:15:01.172 "digest": "sha384", 00:15:01.172 "dhgroup": "ffdhe6144" 00:15:01.172 } 00:15:01.172 } 00:15:01.172 ]' 00:15:01.172 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.173 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.173 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.432 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.432 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.432 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.432 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.432 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.694 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:01.694 08:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.323 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.892 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.892 { 00:15:02.892 "cntlid": 87, 00:15:02.892 "qid": 0, 00:15:02.892 "state": "enabled", 00:15:02.892 "thread": "nvmf_tgt_poll_group_000", 00:15:02.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:02.892 "listen_address": { 00:15:02.892 "trtype": "TCP", 00:15:02.892 "adrfam": "IPv4", 00:15:02.892 "traddr": "10.0.0.3", 00:15:02.892 "trsvcid": "4420" 00:15:02.892 }, 00:15:02.892 "peer_address": { 00:15:02.892 "trtype": "TCP", 00:15:02.892 "adrfam": "IPv4", 00:15:02.892 "traddr": "10.0.0.1", 00:15:02.892 "trsvcid": 
"39376" 00:15:02.892 }, 00:15:02.892 "auth": { 00:15:02.892 "state": "completed", 00:15:02.892 "digest": "sha384", 00:15:02.892 "dhgroup": "ffdhe6144" 00:15:02.892 } 00:15:02.892 } 00:15:02.892 ]' 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.892 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.152 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:03.152 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.152 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.152 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.152 08:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.411 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:03.411 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.983 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.984 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.984 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.552 00:15:04.552 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.552 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.552 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.811 { 00:15:04.811 "cntlid": 89, 00:15:04.811 "qid": 0, 00:15:04.811 "state": "enabled", 00:15:04.811 "thread": "nvmf_tgt_poll_group_000", 00:15:04.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:04.811 "listen_address": { 00:15:04.811 "trtype": "TCP", 00:15:04.811 "adrfam": "IPv4", 00:15:04.811 "traddr": "10.0.0.3", 00:15:04.811 "trsvcid": "4420" 00:15:04.811 }, 00:15:04.811 "peer_address": { 00:15:04.811 
"trtype": "TCP", 00:15:04.811 "adrfam": "IPv4", 00:15:04.811 "traddr": "10.0.0.1", 00:15:04.811 "trsvcid": "39390" 00:15:04.811 }, 00:15:04.811 "auth": { 00:15:04.811 "state": "completed", 00:15:04.811 "digest": "sha384", 00:15:04.811 "dhgroup": "ffdhe8192" 00:15:04.811 } 00:15:04.811 } 00:15:04.811 ]' 00:15:04.811 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.812 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.812 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.812 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.812 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.071 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.071 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.071 08:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.071 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:05.071 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:05.640 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:05.900 08:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.900 08:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.469 00:15:06.469 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.469 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.469 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.729 { 00:15:06.729 "cntlid": 91, 00:15:06.729 "qid": 0, 00:15:06.729 "state": "enabled", 00:15:06.729 "thread": "nvmf_tgt_poll_group_000", 00:15:06.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 
00:15:06.729 "listen_address": { 00:15:06.729 "trtype": "TCP", 00:15:06.729 "adrfam": "IPv4", 00:15:06.729 "traddr": "10.0.0.3", 00:15:06.729 "trsvcid": "4420" 00:15:06.729 }, 00:15:06.729 "peer_address": { 00:15:06.729 "trtype": "TCP", 00:15:06.729 "adrfam": "IPv4", 00:15:06.729 "traddr": "10.0.0.1", 00:15:06.729 "trsvcid": "39424" 00:15:06.729 }, 00:15:06.729 "auth": { 00:15:06.729 "state": "completed", 00:15:06.729 "digest": "sha384", 00:15:06.729 "dhgroup": "ffdhe8192" 00:15:06.729 } 00:15:06.729 } 00:15:06.729 ]' 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.729 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.988 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:06.988 08:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:07.558 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.818 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.387 00:15:08.387 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.387 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.387 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.646 { 00:15:08.646 "cntlid": 93, 00:15:08.646 "qid": 0, 00:15:08.646 "state": "enabled", 00:15:08.646 "thread": 
"nvmf_tgt_poll_group_000", 00:15:08.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:08.646 "listen_address": { 00:15:08.646 "trtype": "TCP", 00:15:08.646 "adrfam": "IPv4", 00:15:08.646 "traddr": "10.0.0.3", 00:15:08.646 "trsvcid": "4420" 00:15:08.646 }, 00:15:08.646 "peer_address": { 00:15:08.646 "trtype": "TCP", 00:15:08.646 "adrfam": "IPv4", 00:15:08.646 "traddr": "10.0.0.1", 00:15:08.646 "trsvcid": "39464" 00:15:08.646 }, 00:15:08.646 "auth": { 00:15:08.646 "state": "completed", 00:15:08.646 "digest": "sha384", 00:15:08.646 "dhgroup": "ffdhe8192" 00:15:08.646 } 00:15:08.646 } 00:15:08.646 ]' 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.646 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.906 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:08.906 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.474 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:09.474 08:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.733 08:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.302 00:15:10.302 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.302 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.303 { 00:15:10.303 "cntlid": 95, 00:15:10.303 "qid": 0, 00:15:10.303 "state": "enabled", 00:15:10.303 
"thread": "nvmf_tgt_poll_group_000", 00:15:10.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:10.303 "listen_address": { 00:15:10.303 "trtype": "TCP", 00:15:10.303 "adrfam": "IPv4", 00:15:10.303 "traddr": "10.0.0.3", 00:15:10.303 "trsvcid": "4420" 00:15:10.303 }, 00:15:10.303 "peer_address": { 00:15:10.303 "trtype": "TCP", 00:15:10.303 "adrfam": "IPv4", 00:15:10.303 "traddr": "10.0.0.1", 00:15:10.303 "trsvcid": "39490" 00:15:10.303 }, 00:15:10.303 "auth": { 00:15:10.303 "state": "completed", 00:15:10.303 "digest": "sha384", 00:15:10.303 "dhgroup": "ffdhe8192" 00:15:10.303 } 00:15:10.303 } 00:15:10.303 ]' 00:15:10.303 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.562 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.821 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:10.821 08:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.390 08:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:11.390 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.650 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.911 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.911 { 00:15:11.911 "cntlid": 97, 00:15:11.911 "qid": 0, 00:15:11.911 "state": "enabled", 00:15:11.911 "thread": "nvmf_tgt_poll_group_000", 00:15:11.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:11.911 "listen_address": { 00:15:11.911 "trtype": "TCP", 00:15:11.911 "adrfam": "IPv4", 00:15:11.911 "traddr": "10.0.0.3", 00:15:11.911 "trsvcid": "4420" 00:15:11.911 }, 00:15:11.911 "peer_address": { 00:15:11.911 "trtype": "TCP", 00:15:11.911 "adrfam": "IPv4", 00:15:11.911 "traddr": "10.0.0.1", 00:15:11.911 "trsvcid": "39528" 00:15:11.911 }, 00:15:11.911 "auth": { 00:15:11.911 "state": "completed", 00:15:11.911 "digest": "sha512", 00:15:11.911 "dhgroup": "null" 00:15:11.911 } 00:15:11.911 } 00:15:11.911 ]' 00:15:11.911 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.170 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.170 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.170 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:12.170 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.171 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.171 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.171 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.432 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:12.432 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:13.000 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.260 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.518 00:15:13.519 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.519 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.519 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.519 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.519 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.519 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.519 08:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.777 { 00:15:13.777 "cntlid": 99, 00:15:13.777 "qid": 0, 00:15:13.777 "state": "enabled", 00:15:13.777 "thread": "nvmf_tgt_poll_group_000", 00:15:13.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:13.777 "listen_address": { 00:15:13.777 "trtype": "TCP", 00:15:13.777 "adrfam": "IPv4", 00:15:13.777 "traddr": "10.0.0.3", 00:15:13.777 "trsvcid": "4420" 00:15:13.777 }, 00:15:13.777 "peer_address": { 00:15:13.777 "trtype": "TCP", 00:15:13.777 "adrfam": "IPv4", 00:15:13.777 "traddr": "10.0.0.1", 00:15:13.777 "trsvcid": "41202" 00:15:13.777 }, 00:15:13.777 "auth": { 00:15:13.777 "state": "completed", 00:15:13.777 "digest": "sha512", 00:15:13.777 "dhgroup": "null" 00:15:13.777 } 00:15:13.777 } 00:15:13.777 ]' 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.777 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.037 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:14.037 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.605 08:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:14.605 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.864 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.124 00:15:15.124 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.124 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.124 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.383 { 00:15:15.383 "cntlid": 101, 00:15:15.383 "qid": 0, 00:15:15.383 "state": "enabled", 00:15:15.383 "thread": "nvmf_tgt_poll_group_000", 00:15:15.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:15.383 "listen_address": { 00:15:15.383 "trtype": "TCP", 00:15:15.383 "adrfam": "IPv4", 00:15:15.383 "traddr": "10.0.0.3", 00:15:15.383 "trsvcid": "4420" 00:15:15.383 }, 00:15:15.383 "peer_address": { 00:15:15.383 "trtype": "TCP", 00:15:15.383 "adrfam": "IPv4", 00:15:15.383 "traddr": "10.0.0.1", 00:15:15.383 "trsvcid": "41226" 00:15:15.383 }, 00:15:15.383 "auth": { 00:15:15.383 "state": "completed", 00:15:15.383 "digest": "sha512", 00:15:15.383 "dhgroup": "null" 00:15:15.383 } 00:15:15.383 } 00:15:15.383 ]' 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.383 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.643 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:15.643 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:16.212 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.472 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.730 00:15:16.730 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.730 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.730 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.987 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.987 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.987 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:16.987 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.987 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.987 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.987 { 00:15:16.987 "cntlid": 103, 00:15:16.987 "qid": 0, 00:15:16.987 "state": "enabled", 00:15:16.987 "thread": "nvmf_tgt_poll_group_000", 00:15:16.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:16.987 "listen_address": { 00:15:16.987 "trtype": "TCP", 00:15:16.987 "adrfam": "IPv4", 00:15:16.987 "traddr": "10.0.0.3", 00:15:16.987 "trsvcid": "4420" 00:15:16.987 }, 00:15:16.987 "peer_address": { 00:15:16.988 "trtype": "TCP", 00:15:16.988 "adrfam": "IPv4", 00:15:16.988 "traddr": "10.0.0.1", 00:15:16.988 "trsvcid": "41258" 00:15:16.988 }, 00:15:16.988 "auth": { 00:15:16.988 "state": "completed", 00:15:16.988 "digest": "sha512", 00:15:16.988 "dhgroup": "null" 00:15:16.988 } 00:15:16.988 } 00:15:16.988 ]' 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.988 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.246 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:17.246 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:17.815 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.074 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.333 00:15:18.333 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.333 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.333 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.592 
08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.592 { 00:15:18.592 "cntlid": 105, 00:15:18.592 "qid": 0, 00:15:18.592 "state": "enabled", 00:15:18.592 "thread": "nvmf_tgt_poll_group_000", 00:15:18.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:18.592 "listen_address": { 00:15:18.592 "trtype": "TCP", 00:15:18.592 "adrfam": "IPv4", 00:15:18.592 "traddr": "10.0.0.3", 00:15:18.592 "trsvcid": "4420" 00:15:18.592 }, 00:15:18.592 "peer_address": { 00:15:18.592 "trtype": "TCP", 00:15:18.592 "adrfam": "IPv4", 00:15:18.592 "traddr": "10.0.0.1", 00:15:18.592 "trsvcid": "41282" 00:15:18.592 }, 00:15:18.592 "auth": { 00:15:18.592 "state": "completed", 00:15:18.592 "digest": "sha512", 00:15:18.592 "dhgroup": "ffdhe2048" 00:15:18.592 } 00:15:18.592 } 00:15:18.592 ]' 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.592 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.851 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.851 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.851 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:18.851 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:19.419 08:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:19.419 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.679 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.938 00:15:19.938 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.938 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.938 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.197 { 00:15:20.197 "cntlid": 107, 00:15:20.197 "qid": 0, 00:15:20.197 "state": "enabled", 00:15:20.197 "thread": "nvmf_tgt_poll_group_000", 00:15:20.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:20.197 "listen_address": { 00:15:20.197 "trtype": "TCP", 00:15:20.197 "adrfam": "IPv4", 00:15:20.197 "traddr": "10.0.0.3", 00:15:20.197 "trsvcid": "4420" 00:15:20.197 }, 00:15:20.197 "peer_address": { 00:15:20.197 "trtype": "TCP", 00:15:20.197 "adrfam": "IPv4", 00:15:20.197 "traddr": "10.0.0.1", 00:15:20.197 "trsvcid": "41314" 00:15:20.197 }, 00:15:20.197 "auth": { 00:15:20.197 "state": "completed", 00:15:20.197 "digest": "sha512", 00:15:20.197 "dhgroup": "ffdhe2048" 00:15:20.197 } 00:15:20.197 } 00:15:20.197 ]' 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.197 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.456 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.456 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.456 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.456 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:20.456 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:21.025 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:21.025 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.285 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.544 00:15:21.544 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.544 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.544 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.803 { 00:15:21.803 "cntlid": 109, 00:15:21.803 "qid": 0, 00:15:21.803 "state": "enabled", 00:15:21.803 "thread": "nvmf_tgt_poll_group_000", 00:15:21.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:21.803 "listen_address": { 00:15:21.803 "trtype": "TCP", 00:15:21.803 "adrfam": "IPv4", 00:15:21.803 "traddr": "10.0.0.3", 00:15:21.803 "trsvcid": "4420" 00:15:21.803 }, 00:15:21.803 "peer_address": { 00:15:21.803 "trtype": "TCP", 00:15:21.803 "adrfam": "IPv4", 00:15:21.803 "traddr": "10.0.0.1", 00:15:21.803 "trsvcid": "41336" 00:15:21.803 }, 00:15:21.803 "auth": { 00:15:21.803 "state": "completed", 00:15:21.803 "digest": "sha512", 00:15:21.803 "dhgroup": "ffdhe2048" 00:15:21.803 } 00:15:21.803 } 00:15:21.803 ]' 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:21.803 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.063 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:22.063 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.063 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.063 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.063 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.322 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:22.322 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.891 08:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.891 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.150 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.410 { 00:15:23.410 "cntlid": 111, 00:15:23.410 "qid": 0, 00:15:23.410 "state": "enabled", 00:15:23.410 "thread": "nvmf_tgt_poll_group_000", 00:15:23.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:23.410 "listen_address": { 00:15:23.410 "trtype": "TCP", 00:15:23.410 "adrfam": "IPv4", 00:15:23.410 "traddr": "10.0.0.3", 00:15:23.410 "trsvcid": "4420" 00:15:23.410 }, 00:15:23.410 "peer_address": { 00:15:23.410 "trtype": "TCP", 00:15:23.410 "adrfam": "IPv4", 00:15:23.410 "traddr": "10.0.0.1", 00:15:23.410 "trsvcid": "34682" 00:15:23.410 }, 00:15:23.410 "auth": { 00:15:23.410 "state": "completed", 00:15:23.410 "digest": "sha512", 00:15:23.410 "dhgroup": "ffdhe2048" 00:15:23.410 } 00:15:23.410 } 00:15:23.410 ]' 00:15:23.410 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.669 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.929 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:23.929 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:24.496 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.496 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:24.496 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.496 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.497 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.497 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.497 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.497 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:24.497 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.756 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.015 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.015 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.015 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.015 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.015 { 00:15:25.015 "cntlid": 113, 00:15:25.015 "qid": 0, 00:15:25.015 "state": "enabled", 00:15:25.015 "thread": "nvmf_tgt_poll_group_000", 00:15:25.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:25.015 "listen_address": { 00:15:25.015 "trtype": "TCP", 00:15:25.015 "adrfam": "IPv4", 00:15:25.015 "traddr": "10.0.0.3", 00:15:25.015 "trsvcid": "4420" 00:15:25.015 }, 00:15:25.015 "peer_address": { 00:15:25.015 "trtype": "TCP", 00:15:25.015 "adrfam": "IPv4", 00:15:25.015 "traddr": "10.0.0.1", 00:15:25.015 "trsvcid": "34720" 00:15:25.015 }, 00:15:25.015 "auth": { 00:15:25.015 "state": "completed", 00:15:25.015 "digest": "sha512", 00:15:25.015 "dhgroup": "ffdhe3072" 00:15:25.015 } 00:15:25.015 } 00:15:25.015 ]' 00:15:25.015 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.275 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.534 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:25.534 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:26.103 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.103 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.362 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.362 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.362 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.363 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.622 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.622 { 00:15:26.622 "cntlid": 115, 00:15:26.622 "qid": 0, 00:15:26.622 "state": "enabled", 00:15:26.622 "thread": "nvmf_tgt_poll_group_000", 00:15:26.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:26.622 "listen_address": { 00:15:26.622 "trtype": "TCP", 00:15:26.622 "adrfam": "IPv4", 00:15:26.622 "traddr": "10.0.0.3", 00:15:26.622 "trsvcid": "4420" 00:15:26.622 }, 00:15:26.622 "peer_address": { 00:15:26.622 "trtype": "TCP", 00:15:26.622 "adrfam": "IPv4", 00:15:26.622 "traddr": "10.0.0.1", 00:15:26.622 "trsvcid": "34748" 00:15:26.622 }, 00:15:26.622 "auth": { 00:15:26.622 "state": "completed", 00:15:26.622 "digest": "sha512", 00:15:26.622 "dhgroup": "ffdhe3072" 00:15:26.622 } 00:15:26.622 } 00:15:26.622 ]' 00:15:26.622 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.881 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.140 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:27.140 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 
94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:27.709 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.968 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.256 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.256 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.559 { 00:15:28.559 "cntlid": 117, 00:15:28.559 "qid": 0, 00:15:28.559 "state": "enabled", 00:15:28.559 "thread": "nvmf_tgt_poll_group_000", 00:15:28.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:28.559 "listen_address": { 00:15:28.559 "trtype": "TCP", 00:15:28.559 "adrfam": "IPv4", 00:15:28.559 "traddr": "10.0.0.3", 00:15:28.559 "trsvcid": "4420" 00:15:28.559 }, 00:15:28.559 "peer_address": { 00:15:28.559 "trtype": "TCP", 00:15:28.559 "adrfam": "IPv4", 00:15:28.559 "traddr": "10.0.0.1", 00:15:28.559 "trsvcid": "34774" 00:15:28.559 }, 00:15:28.559 "auth": { 00:15:28.559 "state": "completed", 00:15:28.559 "digest": "sha512", 00:15:28.559 "dhgroup": "ffdhe3072" 00:15:28.559 } 00:15:28.559 } 00:15:28.559 ]' 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.559 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.818 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:28.818 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.387 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.646 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.906 { 00:15:29.906 "cntlid": 119, 00:15:29.906 "qid": 0, 00:15:29.906 "state": "enabled", 00:15:29.906 "thread": "nvmf_tgt_poll_group_000", 00:15:29.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:29.906 "listen_address": { 00:15:29.906 "trtype": "TCP", 00:15:29.906 "adrfam": "IPv4", 00:15:29.906 "traddr": "10.0.0.3", 00:15:29.906 "trsvcid": "4420" 00:15:29.906 }, 00:15:29.906 "peer_address": { 00:15:29.906 "trtype": "TCP", 00:15:29.906 "adrfam": "IPv4", 00:15:29.906 "traddr": "10.0.0.1", 00:15:29.906 "trsvcid": "34794" 00:15:29.906 }, 00:15:29.906 "auth": { 00:15:29.906 "state": "completed", 00:15:29.906 "digest": "sha512", 00:15:29.906 "dhgroup": "ffdhe3072" 00:15:29.906 } 00:15:29.906 } 00:15:29.906 ]' 00:15:29.906 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.165 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.165 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.165 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.165 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.165 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.165 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.165 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.424 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:30.424 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.992 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.561 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.561 { 00:15:31.561 "cntlid": 121, 00:15:31.561 "qid": 0, 00:15:31.561 "state": "enabled", 00:15:31.561 "thread": "nvmf_tgt_poll_group_000", 00:15:31.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:31.561 "listen_address": { 00:15:31.561 "trtype": "TCP", 00:15:31.561 "adrfam": "IPv4", 00:15:31.561 "traddr": "10.0.0.3", 00:15:31.561 "trsvcid": "4420" 00:15:31.561 }, 00:15:31.561 "peer_address": { 00:15:31.561 "trtype": "TCP", 00:15:31.561 "adrfam": "IPv4", 00:15:31.561 "traddr": "10.0.0.1", 00:15:31.561 "trsvcid": "34832" 00:15:31.561 }, 00:15:31.561 "auth": { 00:15:31.561 "state": "completed", 00:15:31.561 "digest": "sha512", 00:15:31.561 "dhgroup": "ffdhe4096" 00:15:31.561 } 00:15:31.561 } 00:15:31.561 ]' 00:15:31.561 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.820 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.080 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:32.080 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.649 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.908 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.167 00:15:33.168 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.168 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.168 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.427 { 00:15:33.427 "cntlid": 123, 00:15:33.427 "qid": 0, 00:15:33.427 "state": "enabled", 00:15:33.427 "thread": "nvmf_tgt_poll_group_000", 00:15:33.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:33.427 "listen_address": { 00:15:33.427 "trtype": "TCP", 00:15:33.427 "adrfam": "IPv4", 00:15:33.427 "traddr": "10.0.0.3", 00:15:33.427 "trsvcid": "4420" 00:15:33.427 }, 00:15:33.427 "peer_address": { 00:15:33.427 "trtype": "TCP", 00:15:33.427 "adrfam": "IPv4", 00:15:33.427 "traddr": "10.0.0.1", 00:15:33.427 "trsvcid": "44994" 00:15:33.427 }, 00:15:33.427 "auth": { 00:15:33.427 "state": "completed", 00:15:33.427 "digest": "sha512", 00:15:33.427 "dhgroup": "ffdhe4096" 00:15:33.427 } 00:15:33.427 } 00:15:33.427 ]' 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.427 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.686 08:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:33.686 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:34.255 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.255 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:34.256 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.256 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.256 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.256 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.256 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:34.256 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.515 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.515 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.774 00:15:34.774 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.774 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.774 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.032 { 00:15:35.032 "cntlid": 125, 00:15:35.032 "qid": 0, 00:15:35.032 "state": "enabled", 00:15:35.032 "thread": "nvmf_tgt_poll_group_000", 00:15:35.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:35.032 "listen_address": { 00:15:35.032 "trtype": "TCP", 00:15:35.032 "adrfam": "IPv4", 00:15:35.032 "traddr": "10.0.0.3", 00:15:35.032 "trsvcid": "4420" 00:15:35.032 }, 00:15:35.032 "peer_address": { 00:15:35.032 "trtype": "TCP", 00:15:35.032 "adrfam": "IPv4", 00:15:35.032 "traddr": "10.0.0.1", 00:15:35.032 "trsvcid": "45018" 00:15:35.032 }, 00:15:35.032 "auth": { 00:15:35.032 "state": "completed", 00:15:35.032 "digest": "sha512", 00:15:35.032 "dhgroup": "ffdhe4096" 00:15:35.032 } 00:15:35.032 } 00:15:35.032 ]' 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.032 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.291 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:35.291 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:35.860 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.120 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.380 00:15:36.380 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.380 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.380 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.639 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.639 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.639 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.640 { 00:15:36.640 "cntlid": 127, 00:15:36.640 "qid": 0, 00:15:36.640 "state": "enabled", 00:15:36.640 "thread": "nvmf_tgt_poll_group_000", 00:15:36.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:36.640 "listen_address": { 00:15:36.640 "trtype": "TCP", 00:15:36.640 "adrfam": "IPv4", 00:15:36.640 "traddr": "10.0.0.3", 00:15:36.640 "trsvcid": "4420" 00:15:36.640 }, 00:15:36.640 "peer_address": { 00:15:36.640 "trtype": "TCP", 00:15:36.640 "adrfam": "IPv4", 00:15:36.640 "traddr": "10.0.0.1", 00:15:36.640 "trsvcid": "45044" 00:15:36.640 }, 00:15:36.640 "auth": { 00:15:36.640 "state": "completed", 00:15:36.640 "digest": "sha512", 00:15:36.640 "dhgroup": "ffdhe4096" 00:15:36.640 } 00:15:36.640 } 00:15:36.640 ]' 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.640 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.899 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.899 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.899 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.899 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:36.899 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:37.468 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.468 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.469 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.729 08:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.729 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.297 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.298 { 00:15:38.298 "cntlid": 129, 00:15:38.298 "qid": 0, 00:15:38.298 "state": "enabled", 00:15:38.298 "thread": "nvmf_tgt_poll_group_000", 00:15:38.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:38.298 "listen_address": { 00:15:38.298 "trtype": "TCP", 00:15:38.298 "adrfam": "IPv4", 00:15:38.298 "traddr": "10.0.0.3", 00:15:38.298 "trsvcid": "4420" 00:15:38.298 }, 00:15:38.298 "peer_address": { 00:15:38.298 "trtype": "TCP", 00:15:38.298 "adrfam": "IPv4", 00:15:38.298 "traddr": "10.0.0.1", 00:15:38.298 "trsvcid": "45084" 00:15:38.298 }, 00:15:38.298 "auth": { 00:15:38.298 "state": "completed", 00:15:38.298 "digest": "sha512", 00:15:38.298 "dhgroup": "ffdhe6144" 00:15:38.298 } 00:15:38.298 } 00:15:38.298 ]' 00:15:38.298 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.557 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.817 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:38.817 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.385 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.385 08:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.645 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.645 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.645 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.645 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.903 00:15:39.903 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.903 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.903 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.163 { 00:15:40.163 "cntlid": 131, 00:15:40.163 "qid": 0, 00:15:40.163 "state": "enabled", 00:15:40.163 "thread": "nvmf_tgt_poll_group_000", 00:15:40.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:40.163 "listen_address": { 00:15:40.163 "trtype": "TCP", 00:15:40.163 "adrfam": "IPv4", 00:15:40.163 "traddr": "10.0.0.3", 00:15:40.163 "trsvcid": "4420" 00:15:40.163 }, 00:15:40.163 "peer_address": { 00:15:40.163 "trtype": "TCP", 00:15:40.163 "adrfam": "IPv4", 00:15:40.163 "traddr": "10.0.0.1", 00:15:40.163 "trsvcid": "45122" 00:15:40.163 }, 00:15:40.163 "auth": { 00:15:40.163 "state": "completed", 00:15:40.163 "digest": "sha512", 00:15:40.163 "dhgroup": "ffdhe6144" 00:15:40.163 } 00:15:40.163 } 00:15:40.163 ]' 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.163 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:15:40.422 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.422 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.422 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.422 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:40.422 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:40.990 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.250 08:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.250 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.818 00:15:41.818 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.818 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.818 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.077 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.077 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.077 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.077 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.078 { 00:15:42.078 "cntlid": 133, 00:15:42.078 "qid": 0, 00:15:42.078 "state": "enabled", 00:15:42.078 "thread": "nvmf_tgt_poll_group_000", 00:15:42.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:42.078 "listen_address": { 00:15:42.078 "trtype": "TCP", 00:15:42.078 "adrfam": "IPv4", 00:15:42.078 "traddr": "10.0.0.3", 00:15:42.078 "trsvcid": "4420" 00:15:42.078 }, 00:15:42.078 "peer_address": { 00:15:42.078 "trtype": "TCP", 00:15:42.078 "adrfam": "IPv4", 00:15:42.078 "traddr": "10.0.0.1", 00:15:42.078 "trsvcid": "45154" 00:15:42.078 }, 00:15:42.078 "auth": { 00:15:42.078 "state": "completed", 00:15:42.078 "digest": "sha512", 00:15:42.078 "dhgroup": "ffdhe6144" 00:15:42.078 } 00:15:42.078 } 00:15:42.078 ]' 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.078 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.337 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:42.337 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:42.917 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.177 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.177 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.177 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.177 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.177 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.436 00:15:43.436 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.436 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.436 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.694 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.694 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.694 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.694 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.694 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.694 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.694 { 00:15:43.694 "cntlid": 135, 00:15:43.694 "qid": 0, 00:15:43.694 "state": "enabled", 00:15:43.694 "thread": "nvmf_tgt_poll_group_000", 00:15:43.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:43.694 "listen_address": { 00:15:43.694 "trtype": "TCP", 00:15:43.694 "adrfam": "IPv4", 00:15:43.694 "traddr": "10.0.0.3", 00:15:43.694 "trsvcid": "4420" 00:15:43.694 }, 00:15:43.695 "peer_address": { 00:15:43.695 "trtype": "TCP", 00:15:43.695 "adrfam": "IPv4", 00:15:43.695 "traddr": "10.0.0.1", 00:15:43.695 "trsvcid": "50846" 00:15:43.695 }, 00:15:43.695 "auth": { 00:15:43.695 "state": "completed", 00:15:43.695 "digest": "sha512", 00:15:43.695 "dhgroup": "ffdhe6144" 00:15:43.695 } 00:15:43.695 } 00:15:43.695 ]' 00:15:43.695 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.695 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.695 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.953 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.953 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.953 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.953 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.953 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.212 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:44.212 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.779 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.038 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.038 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.038 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.038 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.297 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.556 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.556 { 00:15:45.556 "cntlid": 137, 00:15:45.556 "qid": 0, 00:15:45.556 "state": "enabled", 00:15:45.556 "thread": "nvmf_tgt_poll_group_000", 00:15:45.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:45.556 "listen_address": { 00:15:45.556 "trtype": "TCP", 00:15:45.556 "adrfam": "IPv4", 00:15:45.556 "traddr": "10.0.0.3", 00:15:45.556 "trsvcid": "4420" 00:15:45.556 }, 00:15:45.556 "peer_address": { 00:15:45.556 "trtype": "TCP", 00:15:45.556 "adrfam": "IPv4", 00:15:45.556 "traddr": "10.0.0.1", 00:15:45.556 "trsvcid": "50892" 00:15:45.556 }, 00:15:45.556 "auth": { 00:15:45.556 "state": "completed", 00:15:45.556 "digest": "sha512", 00:15:45.556 "dhgroup": "ffdhe8192" 00:15:45.556 } 00:15:45.556 } 00:15:45.556 ]' 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.815 08:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.815 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.075 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:46.075 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.643 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.902 08:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.902 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.471 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.471 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.471 { 00:15:47.471 "cntlid": 139, 00:15:47.471 "qid": 0, 00:15:47.471 "state": "enabled", 00:15:47.471 "thread": "nvmf_tgt_poll_group_000", 00:15:47.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:47.471 "listen_address": { 00:15:47.471 "trtype": "TCP", 00:15:47.471 "adrfam": "IPv4", 00:15:47.471 "traddr": "10.0.0.3", 00:15:47.471 "trsvcid": "4420" 00:15:47.471 }, 00:15:47.471 "peer_address": { 00:15:47.471 "trtype": "TCP", 00:15:47.471 "adrfam": "IPv4", 00:15:47.471 "traddr": "10.0.0.1", 00:15:47.471 "trsvcid": "50908" 00:15:47.471 }, 00:15:47.471 "auth": { 00:15:47.471 "state": "completed", 00:15:47.471 "digest": "sha512", 00:15:47.471 "dhgroup": "ffdhe8192" 00:15:47.471 } 00:15:47.471 } 00:15:47.471 ]' 00:15:47.471 08:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.730 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.990 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:47.990 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: --dhchap-ctrl-secret DHHC-1:02:MjcyZTE5NzU1MzgyYTZkODMyMzI2MGU5ZGNlZTZlZjg3NDU2NTY5MmM2MDBjZWVlIH9GXw==: 00:15:48.559 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.560 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.128 00:15:49.128 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.128 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.128 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.387 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.387 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.387 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.387 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.387 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.387 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.387 { 00:15:49.387 "cntlid": 141, 00:15:49.387 "qid": 0, 00:15:49.387 "state": "enabled", 00:15:49.387 "thread": "nvmf_tgt_poll_group_000", 00:15:49.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:49.387 "listen_address": { 00:15:49.387 "trtype": "TCP", 00:15:49.387 "adrfam": "IPv4", 00:15:49.387 "traddr": "10.0.0.3", 00:15:49.387 "trsvcid": "4420" 00:15:49.387 }, 00:15:49.387 "peer_address": { 00:15:49.387 "trtype": "TCP", 00:15:49.387 "adrfam": "IPv4", 00:15:49.387 "traddr": "10.0.0.1", 00:15:49.387 "trsvcid": "50930" 00:15:49.387 }, 00:15:49.387 "auth": { 00:15:49.387 "state": "completed", 00:15:49.388 "digest": 
"sha512", 00:15:49.388 "dhgroup": "ffdhe8192" 00:15:49.388 } 00:15:49.388 } 00:15:49.388 ]' 00:15:49.388 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.388 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.388 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.647 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.647 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.647 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.647 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.647 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.906 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:49.906 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:01:MjRhODUwMDI3YzgyNWFlYTBiMTViMWY2Nzc4NTQ2NjL6MDNn: 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.475 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.043 00:15:51.043 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.043 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.043 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.302 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.302 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.302 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.302 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.302 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.302 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.302 { 00:15:51.302 "cntlid": 143, 00:15:51.302 "qid": 0, 00:15:51.302 "state": "enabled", 00:15:51.302 "thread": "nvmf_tgt_poll_group_000", 00:15:51.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:51.302 "listen_address": { 00:15:51.302 "trtype": "TCP", 00:15:51.302 "adrfam": "IPv4", 00:15:51.302 "traddr": "10.0.0.3", 00:15:51.302 "trsvcid": "4420" 00:15:51.302 }, 00:15:51.303 "peer_address": { 00:15:51.303 "trtype": "TCP", 00:15:51.303 "adrfam": "IPv4", 00:15:51.303 "traddr": "10.0.0.1", 00:15:51.303 "trsvcid": "50966" 00:15:51.303 }, 00:15:51.303 "auth": { 00:15:51.303 "state": "completed", 00:15:51.303 
"digest": "sha512", 00:15:51.303 "dhgroup": "ffdhe8192" 00:15:51.303 } 00:15:51.303 } 00:15:51.303 ]' 00:15:51.303 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.303 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.303 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.303 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.303 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.562 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.562 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.562 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.562 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:51.562 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:15:52.130 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.130 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:52.130 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.130 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.130 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.390 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.959 00:15:52.959 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.959 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.959 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.218 { 00:15:53.218 "cntlid": 145, 00:15:53.218 "qid": 0, 00:15:53.218 "state": "enabled", 00:15:53.218 "thread": "nvmf_tgt_poll_group_000", 00:15:53.218 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:53.218 "listen_address": { 00:15:53.218 "trtype": "TCP", 00:15:53.218 "adrfam": "IPv4", 00:15:53.218 "traddr": "10.0.0.3", 00:15:53.218 "trsvcid": "4420" 00:15:53.218 }, 00:15:53.218 "peer_address": { 00:15:53.218 "trtype": "TCP", 00:15:53.218 "adrfam": "IPv4", 00:15:53.218 "traddr": "10.0.0.1", 00:15:53.218 "trsvcid": "51950" 00:15:53.218 }, 00:15:53.218 "auth": { 00:15:53.218 "state": "completed", 00:15:53.218 "digest": "sha512", 00:15:53.218 "dhgroup": "ffdhe8192" 00:15:53.218 } 00:15:53.218 } 00:15:53.218 ]' 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.218 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.477 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.477 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.477 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.477 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:53.477 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:00:Y2E4MGU5MzlkNTU1MTNhZmUyMzI0OTI0MjQ2ZGY3ZDJkMTRmNGZkZTFkYmEzNjM2Sljk5Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyNWI1MDA1MGQzOWFjNmQ1MTk2MjFiNzZjMzg5NjE5NGI4MGJlMDlmMjFmZWRiYzIzMjM3OTBmYjdkNDY1M/XeefM=: 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 00:15:54.045 08:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:54.045 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:54.659 request: 00:15:54.659 { 00:15:54.659 "name": "nvme0", 00:15:54.659 "trtype": "tcp", 00:15:54.659 "traddr": "10.0.0.3", 00:15:54.659 "adrfam": "ipv4", 00:15:54.659 "trsvcid": "4420", 00:15:54.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:54.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:54.659 "prchk_reftag": false, 00:15:54.659 "prchk_guard": false, 00:15:54.659 "hdgst": false, 00:15:54.659 "ddgst": false, 00:15:54.659 "dhchap_key": "key2", 00:15:54.659 "allow_unrecognized_csi": false, 00:15:54.659 "method": "bdev_nvme_attach_controller", 00:15:54.659 "req_id": 1 00:15:54.659 } 00:15:54.659 Got JSON-RPC error response 00:15:54.659 response: 00:15:54.659 { 00:15:54.659 "code": -5, 00:15:54.659 "message": "Input/output error" 00:15:54.659 } 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:54.659 
08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.659 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.660 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:55.265 request: 00:15:55.265 { 00:15:55.265 "name": "nvme0", 00:15:55.265 "trtype": "tcp", 00:15:55.265 "traddr": "10.0.0.3", 00:15:55.265 "adrfam": "ipv4", 00:15:55.265 "trsvcid": "4420", 00:15:55.265 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:55.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:55.265 "prchk_reftag": false, 00:15:55.265 "prchk_guard": false, 00:15:55.265 "hdgst": false, 00:15:55.265 "ddgst": false, 00:15:55.265 "dhchap_key": "key1", 00:15:55.265 "dhchap_ctrlr_key": "ckey2", 00:15:55.265 "allow_unrecognized_csi": false, 00:15:55.265 "method": "bdev_nvme_attach_controller", 00:15:55.265 "req_id": 1 00:15:55.265 } 00:15:55.265 Got JSON-RPC error response 00:15:55.265 response: 00:15:55.265 { 
00:15:55.265 "code": -5, 00:15:55.265 "message": "Input/output error" 00:15:55.265 } 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.265 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.266 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.835 
request: 00:15:55.835 { 00:15:55.835 "name": "nvme0", 00:15:55.835 "trtype": "tcp", 00:15:55.835 "traddr": "10.0.0.3", 00:15:55.835 "adrfam": "ipv4", 00:15:55.835 "trsvcid": "4420", 00:15:55.835 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:55.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:55.835 "prchk_reftag": false, 00:15:55.835 "prchk_guard": false, 00:15:55.835 "hdgst": false, 00:15:55.835 "ddgst": false, 00:15:55.835 "dhchap_key": "key1", 00:15:55.835 "dhchap_ctrlr_key": "ckey1", 00:15:55.835 "allow_unrecognized_csi": false, 00:15:55.835 "method": "bdev_nvme_attach_controller", 00:15:55.835 "req_id": 1 00:15:55.835 } 00:15:55.835 Got JSON-RPC error response 00:15:55.835 response: 00:15:55.835 { 00:15:55.835 "code": -5, 00:15:55.835 "message": "Input/output error" 00:15:55.835 } 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 70550 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70550 ']' 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70550 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70550 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.835 killing process with pid 70550 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70550' 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70550 00:15:55.835 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70550 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.216 08:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=73282 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 73282 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 73282 ']' 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.216 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.785 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:57.785 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.785 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.785 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 73282 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 73282 ']' 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.045 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.304 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.304 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:58.304 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:58.304 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.304 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 null0 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nGX 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.hdj ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hdj 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cke 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BR9 ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BR9 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.564 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.564 08:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iKZ 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.n0S ]] 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n0S 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.565 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pVV 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:58.824 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.393 nvme0n1 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.653 { 00:15:59.653 "cntlid": 1, 00:15:59.653 "qid": 0, 00:15:59.653 "state": "enabled", 00:15:59.653 "thread": "nvmf_tgt_poll_group_000", 00:15:59.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:15:59.653 "listen_address": { 00:15:59.653 "trtype": "TCP", 00:15:59.653 "adrfam": "IPv4", 00:15:59.653 "traddr": "10.0.0.3", 00:15:59.653 "trsvcid": "4420" 00:15:59.653 }, 00:15:59.653 "peer_address": { 00:15:59.653 "trtype": "TCP", 00:15:59.653 "adrfam": "IPv4", 00:15:59.653 "traddr": "10.0.0.1", 00:15:59.653 "trsvcid": "52016" 00:15:59.653 }, 00:15:59.653 "auth": { 00:15:59.653 "state": "completed", 00:15:59.653 "digest": "sha512", 00:15:59.653 "dhgroup": "ffdhe8192" 00:15:59.653 } 00:15:59.653 } 00:15:59.653 ]' 00:15:59.653 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.913 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.173 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:16:00.173 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key3 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:00.742 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:01.002 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:01.002 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:01.002 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:01.002 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:01.002 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.002 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:01.003 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.003 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.003 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.003 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.262 request: 00:16:01.262 { 00:16:01.262 "name": "nvme0", 00:16:01.262 "trtype": "tcp", 00:16:01.262 "traddr": "10.0.0.3", 00:16:01.262 "adrfam": "ipv4", 00:16:01.262 "trsvcid": "4420", 00:16:01.262 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:01.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:16:01.262 "prchk_reftag": false, 00:16:01.262 "prchk_guard": false, 00:16:01.262 "hdgst": false, 00:16:01.262 "ddgst": false, 00:16:01.262 "dhchap_key": "key3", 00:16:01.262 "allow_unrecognized_csi": false, 00:16:01.262 "method": "bdev_nvme_attach_controller", 00:16:01.262 "req_id": 1 00:16:01.262 } 00:16:01.262 Got JSON-RPC error response 00:16:01.262 response: 00:16:01.262 { 00:16:01.262 "code": -5, 00:16:01.262 "message": "Input/output error" 00:16:01.262 } 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:01.262 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.521 request: 00:16:01.521 { 00:16:01.521 "name": "nvme0", 00:16:01.521 "trtype": "tcp", 00:16:01.521 "traddr": "10.0.0.3", 00:16:01.521 "adrfam": "ipv4", 00:16:01.521 "trsvcid": "4420", 00:16:01.521 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:01.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:16:01.521 "prchk_reftag": false, 00:16:01.521 "prchk_guard": false, 00:16:01.521 "hdgst": false, 00:16:01.521 "ddgst": false, 00:16:01.521 "dhchap_key": "key3", 00:16:01.521 "allow_unrecognized_csi": false, 00:16:01.521 "method": "bdev_nvme_attach_controller", 00:16:01.521 "req_id": 1 00:16:01.521 } 00:16:01.521 Got JSON-RPC error response 00:16:01.521 response: 00:16:01.521 { 00:16:01.521 "code": -5, 00:16:01.521 "message": "Input/output error" 00:16:01.521 } 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:01.521 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:01.781 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:02.350 request: 00:16:02.350 { 00:16:02.350 "name": "nvme0", 00:16:02.350 "trtype": "tcp", 00:16:02.350 "traddr": "10.0.0.3", 00:16:02.350 "adrfam": "ipv4", 00:16:02.350 "trsvcid": "4420", 00:16:02.350 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:02.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:16:02.350 "prchk_reftag": false, 00:16:02.350 "prchk_guard": false, 00:16:02.350 "hdgst": false, 00:16:02.350 "ddgst": false, 00:16:02.350 "dhchap_key": "key0", 00:16:02.350 "dhchap_ctrlr_key": "key1", 00:16:02.350 "allow_unrecognized_csi": false, 00:16:02.350 "method": "bdev_nvme_attach_controller", 00:16:02.350 "req_id": 1 00:16:02.350 } 00:16:02.350 Got JSON-RPC error response 00:16:02.350 response: 00:16:02.350 { 00:16:02.350 "code": -5, 00:16:02.350 "message": "Input/output error" 00:16:02.350 } 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:02.350 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:02.350 nvme0n1 00:16:02.610 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:02.610 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.610 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:02.610 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.610 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.610 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:02.870 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:03.808 nvme0n1 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:03.808 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.067 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.067 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:16:04.067 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid 94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -l 0 --dhchap-secret DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: --dhchap-ctrl-secret DHHC-1:03:MGUxYjlmYmQwMjk1ODU4NmZkZTgwYmFlOGJjM2ExNzExNGFkOGRjNjQ4MmZmMzViZjEyZjU4MzJhZDU2YThjNMOBzfI=: 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.635 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:04.894 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:05.463 request: 00:16:05.463 { 00:16:05.463 "name": "nvme0", 00:16:05.463 "trtype": "tcp", 00:16:05.463 "traddr": "10.0.0.3", 00:16:05.463 "adrfam": "ipv4", 00:16:05.463 "trsvcid": "4420", 00:16:05.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:05.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8", 00:16:05.463 "prchk_reftag": false, 00:16:05.464 "prchk_guard": false, 00:16:05.464 "hdgst": false, 00:16:05.464 "ddgst": false, 00:16:05.464 "dhchap_key": "key1", 00:16:05.464 "allow_unrecognized_csi": false, 00:16:05.464 "method": "bdev_nvme_attach_controller", 00:16:05.464 "req_id": 1 00:16:05.464 } 00:16:05.464 Got JSON-RPC error response 00:16:05.464 response: 00:16:05.464 { 00:16:05.464 "code": -5, 00:16:05.464 "message": "Input/output error" 00:16:05.464 } 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:05.464 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:06.402 nvme0n1 00:16:06.402 
08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:06.402 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.402 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:06.402 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.402 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.402 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:06.662 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:06.922 nvme0n1 00:16:06.922 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:06.922 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:06.922 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.181 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.181 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.181 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.441 08:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: '' 2s 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: ]] 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTViNDE2YmYzMTRjNmIzZTJjNjdkOWViZTEyNTVkMTnpmctl: 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:07.441 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:09.346 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:09.346 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:09.346 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:09.346 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:09.346 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: 2s 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:09.347 08:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: ]] 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmZiM2I1ZTk1ZmFlODRlYjAyOTA5YTA4OGEyYjViYjg0ODRjNjkyODliYjdlOTk3d7gIFw==: 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:09.347 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.881 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.882 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.882 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:11.882 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:11.882 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:12.448 nvme0n1 00:16:12.448 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:12.448 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.448 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.448 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.449 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:12.449 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:13.015 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:13.273 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:13.273 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.273 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:13.532 08:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:13.532 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:14.099 request: 00:16:14.099 { 00:16:14.099 "name": "nvme0", 00:16:14.099 "dhchap_key": "key1", 00:16:14.099 "dhchap_ctrlr_key": "key3", 00:16:14.099 "method": "bdev_nvme_set_keys", 00:16:14.100 "req_id": 1 00:16:14.100 } 00:16:14.100 Got JSON-RPC error response 00:16:14.100 response: 00:16:14.100 { 00:16:14.100 "code": -13, 00:16:14.100 "message": "Permission denied" 00:16:14.100 } 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:14.100 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.361 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:14.361 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:15.297 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:15.297 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:15.297 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:15.557 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:16.125 nvme0n1 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.384 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:16.385 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.385 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:16.385 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:16.643 request: 00:16:16.643 { 00:16:16.643 "name": "nvme0", 00:16:16.643 "dhchap_key": "key2", 00:16:16.643 "dhchap_ctrlr_key": "key0", 00:16:16.643 "method": "bdev_nvme_set_keys", 00:16:16.643 "req_id": 1 00:16:16.643 } 00:16:16.643 Got JSON-RPC error response 00:16:16.643 response: 00:16:16.643 { 00:16:16.643 "code": -13, 00:16:16.643 "message": "Permission denied" 00:16:16.643 } 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:16.903 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:18.282 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:18.282 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.282 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 70582 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70582 ']' 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70582 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70582 00:16:18.282 killing process with pid 70582 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:18.282 08:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70582' 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70582 00:16:18.282 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70582 00:16:20.817 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.818 rmmod nvme_tcp 00:16:20.818 rmmod nvme_fabrics 00:16:20.818 rmmod nvme_keyring 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 73282 ']' 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 73282 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 73282 ']' 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 73282 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73282 00:16:20.818 killing process with pid 73282 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73282' 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 73282 00:16:20.818 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 73282 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
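For reference, a minimal sketch of the DH-CHAP re-key flow exercised by target/auth.sh above, assuming the same RPC sockets, NQNs and key names used in this run (the target is told which keys the host may use first, then the host rotates; a host-side bdev_nvme_set_keys with a key pair outside the allowed set is expected to fail with JSON-RPC code -13, Permission denied):

    # target side: allow key2/key3 for this host on cnode0 (default target RPC socket assumed)
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host side: rotate the existing controller to the newly allowed keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # keys the subsystem does not allow are rejected with -13 (Permission denied)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3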
00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.757 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.017 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.017 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.017 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.017 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.017 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nGX /tmp/spdk.key-sha256.cke /tmp/spdk.key-sha384.iKZ /tmp/spdk.key-sha512.pVV /tmp/spdk.key-sha512.hdj /tmp/spdk.key-sha384.BR9 /tmp/spdk.key-sha256.n0S '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:22.277 00:16:22.277 real 2m40.020s 00:16:22.277 user 6m2.201s 00:16:22.277 sys 0m33.816s 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.277 ************************************ 00:16:22.277 END TEST nvmf_auth_target 
00:16:22.277 ************************************ 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.277 ************************************ 00:16:22.277 START TEST nvmf_bdevio_no_huge 00:16:22.277 ************************************ 00:16:22.277 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:22.538 * Looking for test storage... 00:16:22.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:22.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.538 --rc genhtml_branch_coverage=1 00:16:22.538 --rc genhtml_function_coverage=1 00:16:22.538 --rc genhtml_legend=1 00:16:22.538 --rc geninfo_all_blocks=1 00:16:22.538 --rc geninfo_unexecuted_blocks=1 00:16:22.538 00:16:22.538 ' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:22.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.538 --rc genhtml_branch_coverage=1 00:16:22.538 --rc genhtml_function_coverage=1 00:16:22.538 --rc genhtml_legend=1 00:16:22.538 --rc geninfo_all_blocks=1 00:16:22.538 --rc geninfo_unexecuted_blocks=1 00:16:22.538 00:16:22.538 ' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:22.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.538 --rc genhtml_branch_coverage=1 00:16:22.538 --rc genhtml_function_coverage=1 00:16:22.538 --rc genhtml_legend=1 00:16:22.538 --rc geninfo_all_blocks=1 00:16:22.538 --rc geninfo_unexecuted_blocks=1 00:16:22.538 00:16:22.538 ' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:22.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.538 --rc genhtml_branch_coverage=1 00:16:22.538 --rc genhtml_function_coverage=1 00:16:22.538 --rc genhtml_legend=1 00:16:22.538 --rc geninfo_all_blocks=1 00:16:22.538 --rc geninfo_unexecuted_blocks=1 00:16:22.538 00:16:22.538 ' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.538 
08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.538 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.539 
08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:22.539 Cannot find device "nvmf_init_br" 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:22.539 Cannot find device "nvmf_init_br2" 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:22.539 Cannot find device "nvmf_tgt_br" 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.539 Cannot find device "nvmf_tgt_br2" 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:22.539 Cannot find device "nvmf_init_br" 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:22.539 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:22.799 Cannot find device "nvmf_init_br2" 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:22.799 Cannot find device "nvmf_tgt_br" 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:22.799 Cannot find device "nvmf_tgt_br2" 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:22.799 Cannot find device "nvmf_br" 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:22.799 Cannot find device "nvmf_init_if" 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:22.799 Cannot find device "nvmf_init_if2" 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:22.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:22.799 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.058 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.059 08:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:16:23.059 00:16:23.059 --- 10.0.0.3 ping statistics --- 00:16:23.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.059 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.059 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.059 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:16:23.059 00:16:23.059 --- 10.0.0.4 ping statistics --- 00:16:23.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.059 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:23.059 00:16:23.059 --- 10.0.0.1 ping statistics --- 00:16:23.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.059 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:23.059 00:16:23.059 --- 10.0.0.2 ping statistics --- 00:16:23.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.059 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.059 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=73937 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 73937 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 73937 ']' 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.059 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:23.318 [2024-11-17 08:16:28.138416] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
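A condensed sketch of the veth topology nvmf_veth_init builds above, showing one initiator/target pair only (the run also creates the nvmf_init_if2/nvmf_tgt_if2 pair on 10.0.0.2 and 10.0.0.4); commands are the same ip/iptables calls traced in the log and assume root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_init_br master nvmf_br                   # bridge the host-side peer ends
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                        # host initiator -> target inside the netns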
00:16:23.318 [2024-11-17 08:16:28.138927] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:23.581 [2024-11-17 08:16:28.340765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.581 [2024-11-17 08:16:28.481507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.581 [2024-11-17 08:16:28.481568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.581 [2024-11-17 08:16:28.481582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.581 [2024-11-17 08:16:28.481613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.581 [2024-11-17 08:16:28.481624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.581 [2024-11-17 08:16:28.483560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:23.581 [2024-11-17 08:16:28.483741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:23.581 [2024-11-17 08:16:28.483916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:23.581 [2024-11-17 08:16:28.483929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.840 [2024-11-17 08:16:28.654190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.100 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.100 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:24.100 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.100 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.100 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 [2024-11-17 08:16:29.014403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 Malloc0 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.100 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.100 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:24.360 [2024-11-17 08:16:29.125474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:24.360 { 00:16:24.360 "params": { 00:16:24.360 "name": "Nvme$subsystem", 00:16:24.360 "trtype": "$TEST_TRANSPORT", 00:16:24.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.360 "adrfam": "ipv4", 00:16:24.360 "trsvcid": "$NVMF_PORT", 00:16:24.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.360 "hdgst": ${hdgst:-false}, 00:16:24.360 "ddgst": ${ddgst:-false} 00:16:24.360 }, 00:16:24.360 "method": "bdev_nvme_attach_controller" 00:16:24.360 } 00:16:24.360 EOF 00:16:24.360 )") 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
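A condensed sketch of the --no-huge target bring-up traced above, with repo paths shortened and the default target RPC socket assumed (bdevio.sh issues the same calls through rpc_cmd); bdevio then attaches using the generated JSON config printed below (trtype tcp, 10.0.0.3:4420, subnqn nqn.2016-06.io.spdk:cnode1):

    # start the target inside the test namespace without hugepages
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420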
00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:24.360 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:24.360 "params": { 00:16:24.360 "name": "Nvme1", 00:16:24.360 "trtype": "tcp", 00:16:24.360 "traddr": "10.0.0.3", 00:16:24.360 "adrfam": "ipv4", 00:16:24.360 "trsvcid": "4420", 00:16:24.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.360 "hdgst": false, 00:16:24.360 "ddgst": false 00:16:24.360 }, 00:16:24.360 "method": "bdev_nvme_attach_controller" 00:16:24.360 }' 00:16:24.360 [2024-11-17 08:16:29.225684] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:24.360 [2024-11-17 08:16:29.225796] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid73972 ] 00:16:24.626 [2024-11-17 08:16:29.423612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.626 [2024-11-17 08:16:29.569399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.626 [2024-11-17 08:16:29.569546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.626 [2024-11-17 08:16:29.569583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.892 [2024-11-17 08:16:29.756167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.169 I/O targets: 00:16:25.169 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:25.169 00:16:25.169 00:16:25.169 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.169 http://cunit.sourceforge.net/ 00:16:25.169 00:16:25.169 00:16:25.169 Suite: bdevio tests on: Nvme1n1 00:16:25.169 Test: blockdev write read block ...passed 00:16:25.169 Test: blockdev write zeroes read block ...passed 00:16:25.169 Test: blockdev write zeroes read no split ...passed 00:16:25.169 Test: blockdev write zeroes read split ...passed 00:16:25.169 Test: blockdev write zeroes read split partial ...passed 00:16:25.169 Test: blockdev reset ...[2024-11-17 08:16:30.171817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:25.169 [2024-11-17 08:16:30.172229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:25.443 [2024-11-17 08:16:30.187314] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:25.443 passed 00:16:25.443 Test: blockdev write read 8 blocks ...passed 00:16:25.443 Test: blockdev write read size > 128k ...passed 00:16:25.443 Test: blockdev write read invalid size ...passed 00:16:25.443 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:25.443 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:25.443 Test: blockdev write read max offset ...passed 00:16:25.443 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:25.443 Test: blockdev writev readv 8 blocks ...passed 00:16:25.443 Test: blockdev writev readv 30 x 1block ...passed 00:16:25.443 Test: blockdev writev readv block ...passed 00:16:25.443 Test: blockdev writev readv size > 128k ...passed 00:16:25.443 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:25.443 Test: blockdev comparev and writev ...[2024-11-17 08:16:30.198395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.198461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.198488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.198508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.198974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.199148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.199180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.199197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.199590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.199619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.199639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.199662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.200009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.200036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.200054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.443 [2024-11-17 08:16:30.200072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:25.443 passed 00:16:25.443 Test: blockdev nvme passthru rw ...passed 00:16:25.443 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:16:30.200962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.443 [2024-11-17 08:16:30.201005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.201124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.443 [2024-11-17 08:16:30.201144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.201249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.443 [2024-11-17 08:16:30.201269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:25.443 [2024-11-17 08:16:30.201370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.443 [2024-11-17 08:16:30.201391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:25.443 passed 00:16:25.443 Test: blockdev nvme admin passthru ...passed 00:16:25.443 Test: blockdev copy ...passed 00:16:25.443 00:16:25.443 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.443 suites 1 1 n/a 0 0 00:16:25.443 tests 23 23 23 0 0 00:16:25.443 asserts 152 152 152 0 n/a 00:16:25.443 00:16:25.443 Elapsed time = 0.257 seconds 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.380 rmmod nvme_tcp 00:16:26.380 rmmod nvme_fabrics 00:16:26.380 rmmod nvme_keyring 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 73937 ']' 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 73937 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 73937 ']' 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 73937 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73937 00:16:26.380 killing process with pid 73937 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73937' 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 73937 00:16:26.380 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 73937 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:27.319 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:27.319 08:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:16:27.319 00:16:27.319 real 0m5.146s 00:16:27.319 user 0m17.032s 00:16:27.319 sys 0m2.080s 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.319 ************************************ 00:16:27.319 END TEST nvmf_bdevio_no_huge 00:16:27.319 ************************************ 00:16:27.319 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.578 ************************************ 00:16:27.578 START TEST nvmf_tls 00:16:27.578 ************************************ 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:27.578 * Looking for test storage... 
00:16:27.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.578 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.839 --rc genhtml_branch_coverage=1 00:16:27.839 --rc genhtml_function_coverage=1 00:16:27.839 --rc genhtml_legend=1 00:16:27.839 --rc geninfo_all_blocks=1 00:16:27.839 --rc geninfo_unexecuted_blocks=1 00:16:27.839 00:16:27.839 ' 00:16:27.839 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.840 --rc genhtml_branch_coverage=1 00:16:27.840 --rc genhtml_function_coverage=1 00:16:27.840 --rc genhtml_legend=1 00:16:27.840 --rc geninfo_all_blocks=1 00:16:27.840 --rc geninfo_unexecuted_blocks=1 00:16:27.840 00:16:27.840 ' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.840 --rc genhtml_branch_coverage=1 00:16:27.840 --rc genhtml_function_coverage=1 00:16:27.840 --rc genhtml_legend=1 00:16:27.840 --rc geninfo_all_blocks=1 00:16:27.840 --rc geninfo_unexecuted_blocks=1 00:16:27.840 00:16:27.840 ' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.840 --rc genhtml_branch_coverage=1 00:16:27.840 --rc genhtml_function_coverage=1 00:16:27.840 --rc genhtml_legend=1 00:16:27.840 --rc geninfo_all_blocks=1 00:16:27.840 --rc geninfo_unexecuted_blocks=1 00:16:27.840 00:16:27.840 ' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.840 08:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.840 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.840 
08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.840 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:27.840 Cannot find device "nvmf_init_br" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:27.841 Cannot find device "nvmf_init_br2" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:27.841 Cannot find device "nvmf_tgt_br" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.841 Cannot find device "nvmf_tgt_br2" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:27.841 Cannot find device "nvmf_init_br" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:27.841 Cannot find device "nvmf_init_br2" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:27.841 Cannot find device "nvmf_tgt_br" 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:16:27.841 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:27.841 Cannot find device "nvmf_tgt_br2" 00:16:28.100 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:16:28.100 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:28.100 Cannot find device "nvmf_br" 00:16:28.100 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:28.101 Cannot find device "nvmf_init_if" 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:28.101 Cannot find device "nvmf_init_if2" 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.101 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.101 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:28.361 08:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:28.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:16:28.361 00:16:28.361 --- 10.0.0.3 ping statistics --- 00:16:28.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.361 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:28.361 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:28.361 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:16:28.361 00:16:28.361 --- 10.0.0.4 ping statistics --- 00:16:28.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.361 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:28.361 00:16:28.361 --- 10.0.0.1 ping statistics --- 00:16:28.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.361 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:28.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:28.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:28.361 00:16:28.361 --- 10.0.0.2 ping statistics --- 00:16:28.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.361 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74253 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74253 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74253 ']' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.361 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.621 [2024-11-17 08:16:33.431299] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
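The veth/namespace plumbing above gives the target its own network stack: two veth pairs lead into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4 on the target side, 10.0.0.1 and 10.0.0.2 on the initiator side), the host-side peers are enslaved to the nvmf_br bridge, and the port-4420 iptables rules carry an SPDK_NVMF comment so the iptables-save | grep -v SPDK_NVMF | iptables-restore teardown seen earlier removes only them. A condensed sketch of one of the two paths, using the same interface names as the log:

    # One initiator/target path; the log builds a second, parallel pair the same way.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-facing pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-facing pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # The bridge joins the host-side peers so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic in; the comment lets the teardown strip only these rules.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3    # sanity check before nvmf_tgt is started inside the namespace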
00:16:28.621 [2024-11-17 08:16:33.431939] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.621 [2024-11-17 08:16:33.620743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.880 [2024-11-17 08:16:33.728370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.880 [2024-11-17 08:16:33.728425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.880 [2024-11-17 08:16:33.728450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.880 [2024-11-17 08:16:33.728470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.880 [2024-11-17 08:16:33.728483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.880 [2024-11-17 08:16:33.729671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:29.450 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:29.709 true 00:16:29.709 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:29.709 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:29.709 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:29.709 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:29.709 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:29.969 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:29.969 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:30.228 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:30.228 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:30.228 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:30.487 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:30.487 08:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.487 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:30.487 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:30.487 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.487 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:30.747 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:30.747 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:30.747 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:31.006 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.006 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:31.265 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:31.265 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:31.265 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:31.265 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:31.265 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.524 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:31.524 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:31.525 08:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:31.525 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.IOKwJ3dia6 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.D5BrlyeIpd 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.IOKwJ3dia6 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.D5BrlyeIpd 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:31.784 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:32.352 [2024-11-17 08:16:37.155605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.352 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.IOKwJ3dia6 00:16:32.352 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IOKwJ3dia6 00:16:32.352 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:32.611 [2024-11-17 08:16:37.466189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.611 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:32.870 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:32.870 [2024-11-17 08:16:37.837683] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:32.870 [2024-11-17 08:16:37.837971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.870 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:33.129 malloc0 00:16:33.129 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:33.388 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.IOKwJ3dia6 00:16:33.647 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:33.907 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.IOKwJ3dia6 00:16:46.121 Initializing NVMe Controllers 00:16:46.121 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:46.121 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:46.122 Initialization complete. Launching workers. 00:16:46.122 ======================================================== 00:16:46.122 Latency(us) 00:16:46.122 Device Information : IOPS MiB/s Average min max 00:16:46.122 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10877.70 42.49 5884.74 1505.73 7777.61 00:16:46.122 ======================================================== 00:16:46.122 Total : 10877.70 42.49 5884.74 1505.73 7777.61 00:16:46.122 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IOKwJ3dia6 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IOKwJ3dia6 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74483 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74483 /var/tmp/bdevperf.sock 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74483 ']' 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
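The two NVMeTLSkey-1:01:...: strings written to /tmp/tmp.IOKwJ3dia6 and /tmp/tmp.D5BrlyeIpd above are TLS pre-shared keys in the NVMe/TCP PSK interchange format: the configured key bytes with a CRC-32 appended, base64-encoded, behind a version/hash prefix. A minimal sketch of that encoding plus the target-side registration already shown in the log; the CRC byte order and the reading of the 01 field as the SHA-256 retained-hash variant are stated here as assumptions, and the helper name simply mirrors the log's format_interchange_psk:

    # Encode a PSK in interchange form: NVMeTLSkey-1:<hash-id>:<base64(key || crc32)>:
    # (assumption: the 4-byte CRC-32 of the key bytes is appended least-significant-byte
    #  first, and hash-id 01 selects the SHA-256 retained-hash variant)
    format_interchange_psk() {
        local key=$1 hash=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$hash"
    }

    key_path=$(mktemp)
    format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
    chmod 0600 "$key_path"    # the log locks key files down before registering them

    # Target side, as in the log: TLS-enabled listener, key registered in the keyring,
    # and the key bound to the host that is allowed to connect with it.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc_py keyring_file_add_key key0 "$key_path"
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0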
00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.122 [2024-11-17 08:16:49.137394] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:46.122 [2024-11-17 08:16:49.137554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74483 ] 00:16:46.122 [2024-11-17 08:16:49.320572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.122 [2024-11-17 08:16:49.446328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.122 [2024-11-17 08:16:49.686408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:46.122 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IOKwJ3dia6 00:16:46.122 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:46.122 [2024-11-17 08:16:50.327497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.122 TLSTESTn1 00:16:46.122 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:46.122 Running I/O for 10 seconds... 
00:16:47.625 4582.00 IOPS, 17.90 MiB/s [2024-11-17T08:16:53.573Z] 4585.00 IOPS, 17.91 MiB/s [2024-11-17T08:16:54.507Z] 4587.00 IOPS, 17.92 MiB/s [2024-11-17T08:16:55.881Z] 4589.25 IOPS, 17.93 MiB/s [2024-11-17T08:16:56.816Z] 4594.40 IOPS, 17.95 MiB/s [2024-11-17T08:16:57.750Z] 4600.33 IOPS, 17.97 MiB/s [2024-11-17T08:16:58.685Z] 4599.43 IOPS, 17.97 MiB/s [2024-11-17T08:16:59.619Z] 4602.88 IOPS, 17.98 MiB/s [2024-11-17T08:17:00.554Z] 4600.44 IOPS, 17.97 MiB/s [2024-11-17T08:17:00.554Z] 4597.30 IOPS, 17.96 MiB/s 00:16:55.542 Latency(us) 00:16:55.542 [2024-11-17T08:17:00.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.543 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:55.543 Verification LBA range: start 0x0 length 0x2000 00:16:55.543 TLSTESTn1 : 10.02 4602.96 17.98 0.00 0.00 27763.94 5948.25 21266.30 00:16:55.543 [2024-11-17T08:17:00.555Z] =================================================================================================================== 00:16:55.543 [2024-11-17T08:17:00.555Z] Total : 4602.96 17.98 0.00 0.00 27763.94 5948.25 21266.30 00:16:55.543 { 00:16:55.543 "results": [ 00:16:55.543 { 00:16:55.543 "job": "TLSTESTn1", 00:16:55.543 "core_mask": "0x4", 00:16:55.543 "workload": "verify", 00:16:55.543 "status": "finished", 00:16:55.543 "verify_range": { 00:16:55.543 "start": 0, 00:16:55.543 "length": 8192 00:16:55.543 }, 00:16:55.543 "queue_depth": 128, 00:16:55.543 "io_size": 4096, 00:16:55.543 "runtime": 10.015075, 00:16:55.543 "iops": 4602.961036237872, 00:16:55.543 "mibps": 17.980316547804186, 00:16:55.543 "io_failed": 0, 00:16:55.543 "io_timeout": 0, 00:16:55.543 "avg_latency_us": 27763.936185741688, 00:16:55.543 "min_latency_us": 5948.2473895582325, 00:16:55.543 "max_latency_us": 21266.300401606426 00:16:55.543 } 00:16:55.543 ], 00:16:55.543 "core_count": 1 00:16:55.543 } 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74483 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74483 ']' 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74483 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.543 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74483 00:16:55.801 killing process with pid 74483 00:16:55.801 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.801 00:16:55.801 Latency(us) 00:16:55.801 [2024-11-17T08:17:00.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.801 [2024-11-17T08:17:00.813Z] =================================================================================================================== 00:16:55.801 [2024-11-17T08:17:00.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.801 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:55.801 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:55.801 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 74483' 00:16:55.801 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74483 00:16:55.801 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74483 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D5BrlyeIpd 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D5BrlyeIpd 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D5BrlyeIpd 00:16:57.176 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D5BrlyeIpd 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74624 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74624 /var/tmp/bdevperf.sock 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74624 ']' 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.177 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.177 [2024-11-17 08:17:01.917798] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:57.177 [2024-11-17 08:17:01.917919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74624 ] 00:16:57.177 [2024-11-17 08:17:02.102363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.435 [2024-11-17 08:17:02.232254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.694 [2024-11-17 08:17:02.464392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.952 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.952 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:57.952 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D5BrlyeIpd 00:16:57.952 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:58.210 [2024-11-17 08:17:03.140542] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.210 [2024-11-17 08:17:03.150929] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:58.210 [2024-11-17 08:17:03.151584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:58.210 [2024-11-17 08:17:03.152532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:58.210 [2024-11-17 08:17:03.153529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:58.210 [2024-11-17 08:17:03.153568] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:58.210 [2024-11-17 08:17:03.153590] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:58.210 [2024-11-17 08:17:03.153613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
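Editor's note: the attach failure above is the intended outcome of the NOT run_bdevperf case at target/tls.sh@147. The initiator registered /tmp/tmp.D5BrlyeIpd, a key other than the one the target was configured with for host1, so the TLS handshake fails, the connection is torn down, and the RPC below returns an error, which the NOT wrapper turns into a pass. A rough sketch of the same expected-failure check reduced to the attach step alone (run_bdevperf and NOT are helpers from the test scripts; this is not their full implementation):

# Expect the attach to fail when key0 points at a PSK the target does not recognize.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "attach unexpectedly succeeded with the wrong PSK" >&2
    exit 1
fi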
00:16:58.210 request: 00:16:58.210 { 00:16:58.210 "name": "TLSTEST", 00:16:58.210 "trtype": "tcp", 00:16:58.210 "traddr": "10.0.0.3", 00:16:58.210 "adrfam": "ipv4", 00:16:58.210 "trsvcid": "4420", 00:16:58.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.210 "prchk_reftag": false, 00:16:58.210 "prchk_guard": false, 00:16:58.210 "hdgst": false, 00:16:58.210 "ddgst": false, 00:16:58.210 "psk": "key0", 00:16:58.210 "allow_unrecognized_csi": false, 00:16:58.210 "method": "bdev_nvme_attach_controller", 00:16:58.210 "req_id": 1 00:16:58.210 } 00:16:58.210 Got JSON-RPC error response 00:16:58.210 response: 00:16:58.210 { 00:16:58.210 "code": -5, 00:16:58.210 "message": "Input/output error" 00:16:58.210 } 00:16:58.210 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74624 00:16:58.210 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74624 ']' 00:16:58.210 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74624 00:16:58.210 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:58.210 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.210 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74624 00:16:58.468 killing process with pid 74624 00:16:58.468 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.468 00:16:58.468 Latency(us) 00:16:58.468 [2024-11-17T08:17:03.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.468 [2024-11-17T08:17:03.480Z] =================================================================================================================== 00:16:58.468 [2024-11-17T08:17:03.480Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.468 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:58.468 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:58.468 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74624' 00:16:58.468 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74624 00:16:58.468 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74624 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IOKwJ3dia6 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IOKwJ3dia6 
00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.IOKwJ3dia6 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IOKwJ3dia6 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74669 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74669 /var/tmp/bdevperf.sock 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74669 ']' 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.405 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.405 [2024-11-17 08:17:04.393207] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:59.405 [2024-11-17 08:17:04.393343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74669 ] 00:16:59.664 [2024-11-17 08:17:04.577204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.922 [2024-11-17 08:17:04.701954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.181 [2024-11-17 08:17:04.935485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:00.440 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.440 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:00.440 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IOKwJ3dia6 00:17:00.440 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:00.699 [2024-11-17 08:17:05.599779] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.699 [2024-11-17 08:17:05.608496] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:00.699 [2024-11-17 08:17:05.608544] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:00.699 [2024-11-17 08:17:05.608607] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:00.699 [2024-11-17 08:17:05.608739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:00.699 [2024-11-17 08:17:05.609706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:00.699 [2024-11-17 08:17:05.610703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:00.699 [2024-11-17 08:17:05.610745] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:00.699 [2024-11-17 08:17:05.610771] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:00.699 [2024-11-17 08:17:05.610793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
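Editor's note: this second expected failure (target/tls.sh@150) uses the correct key file but the wrong host NQN. As the error above shows, the target looks the PSK up by a TLS PSK identity that combines the host and subsystem NQNs, and only nqn.2016-06.io.spdk:host1 was registered with a PSK. A short sketch, for illustration only:

# The target builds the lookup identity "NVMe0R01 <hostnqn> <subnqn>", as printed in the error above.
# Hypothetical fix, not something the test does: registering host2 with its own PSK on the target
# would make that identity resolvable.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0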
00:17:00.699 request: 00:17:00.699 { 00:17:00.699 "name": "TLSTEST", 00:17:00.699 "trtype": "tcp", 00:17:00.699 "traddr": "10.0.0.3", 00:17:00.699 "adrfam": "ipv4", 00:17:00.699 "trsvcid": "4420", 00:17:00.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.699 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:00.699 "prchk_reftag": false, 00:17:00.699 "prchk_guard": false, 00:17:00.699 "hdgst": false, 00:17:00.699 "ddgst": false, 00:17:00.699 "psk": "key0", 00:17:00.699 "allow_unrecognized_csi": false, 00:17:00.699 "method": "bdev_nvme_attach_controller", 00:17:00.699 "req_id": 1 00:17:00.699 } 00:17:00.699 Got JSON-RPC error response 00:17:00.699 response: 00:17:00.699 { 00:17:00.699 "code": -5, 00:17:00.699 "message": "Input/output error" 00:17:00.699 } 00:17:00.699 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74669 00:17:00.699 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74669 ']' 00:17:00.699 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74669 00:17:00.699 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:00.699 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.699 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74669 00:17:00.699 killing process with pid 74669 00:17:00.699 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.699 00:17:00.699 Latency(us) 00:17:00.699 [2024-11-17T08:17:05.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.699 [2024-11-17T08:17:05.711Z] =================================================================================================================== 00:17:00.699 [2024-11-17T08:17:05.712Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:00.700 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:00.700 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:00.700 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74669' 00:17:00.700 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74669 00:17:00.700 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74669 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IOKwJ3dia6 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IOKwJ3dia6 
00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.IOKwJ3dia6 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IOKwJ3dia6 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74705 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74705 /var/tmp/bdevperf.sock 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74705 ']' 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.154 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.154 [2024-11-17 08:17:06.972615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:02.154 [2024-11-17 08:17:06.972740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74705 ] 00:17:02.429 [2024-11-17 08:17:07.159910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.429 [2024-11-17 08:17:07.284463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.688 [2024-11-17 08:17:07.512409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.944 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.944 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:02.944 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IOKwJ3dia6 00:17:03.202 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:03.202 [2024-11-17 08:17:08.172720] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.202 [2024-11-17 08:17:08.182755] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:03.202 [2024-11-17 08:17:08.182799] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:03.202 [2024-11-17 08:17:08.182854] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:03.202 [2024-11-17 08:17:08.183625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:03.202 [2024-11-17 08:17:08.184584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:03.202 [2024-11-17 08:17:08.185582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:03.202 [2024-11-17 08:17:08.185623] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:03.202 [2024-11-17 08:17:08.185645] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:03.202 [2024-11-17 08:17:08.185666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:03.202 request: 00:17:03.202 { 00:17:03.202 "name": "TLSTEST", 00:17:03.202 "trtype": "tcp", 00:17:03.202 "traddr": "10.0.0.3", 00:17:03.202 "adrfam": "ipv4", 00:17:03.202 "trsvcid": "4420", 00:17:03.202 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:03.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.202 "prchk_reftag": false, 00:17:03.202 "prchk_guard": false, 00:17:03.202 "hdgst": false, 00:17:03.202 "ddgst": false, 00:17:03.202 "psk": "key0", 00:17:03.202 "allow_unrecognized_csi": false, 00:17:03.202 "method": "bdev_nvme_attach_controller", 00:17:03.202 "req_id": 1 00:17:03.202 } 00:17:03.202 Got JSON-RPC error response 00:17:03.202 response: 00:17:03.202 { 00:17:03.202 "code": -5, 00:17:03.202 "message": "Input/output error" 00:17:03.202 } 00:17:03.202 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74705 00:17:03.202 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74705 ']' 00:17:03.202 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74705 00:17:03.202 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:03.202 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.474 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74705 00:17:03.474 killing process with pid 74705 00:17:03.474 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.474 00:17:03.474 Latency(us) 00:17:03.474 [2024-11-17T08:17:08.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.474 [2024-11-17T08:17:08.486Z] =================================================================================================================== 00:17:03.474 [2024-11-17T08:17:08.486Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.474 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:03.474 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:03.474 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74705' 00:17:03.474 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74705 00:17:03.474 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74705 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:04.406 08:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74746 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74746 /var/tmp/bdevperf.sock 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74746 ']' 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.406 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.664 [2024-11-17 08:17:09.423241] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:04.664 [2024-11-17 08:17:09.423853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74746 ] 00:17:04.664 [2024-11-17 08:17:09.605116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.921 [2024-11-17 08:17:09.732644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.180 [2024-11-17 08:17:09.967515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.438 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.438 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:05.438 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:05.438 [2024-11-17 08:17:10.432282] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:05.438 [2024-11-17 08:17:10.432332] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:05.438 request: 00:17:05.438 { 00:17:05.438 "name": "key0", 00:17:05.438 "path": "", 00:17:05.438 "method": "keyring_file_add_key", 00:17:05.438 "req_id": 1 00:17:05.438 } 00:17:05.438 Got JSON-RPC error response 00:17:05.438 response: 00:17:05.438 { 00:17:05.438 "code": -1, 00:17:05.438 "message": "Operation not permitted" 00:17:05.438 } 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:05.697 [2024-11-17 08:17:10.636190] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.697 [2024-11-17 08:17:10.636274] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:05.697 request: 00:17:05.697 { 00:17:05.697 "name": "TLSTEST", 00:17:05.697 "trtype": "tcp", 00:17:05.697 "traddr": "10.0.0.3", 00:17:05.697 "adrfam": "ipv4", 00:17:05.697 "trsvcid": "4420", 00:17:05.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.697 "prchk_reftag": false, 00:17:05.697 "prchk_guard": false, 00:17:05.697 "hdgst": false, 00:17:05.697 "ddgst": false, 00:17:05.697 "psk": "key0", 00:17:05.697 "allow_unrecognized_csi": false, 00:17:05.697 "method": "bdev_nvme_attach_controller", 00:17:05.697 "req_id": 1 00:17:05.697 } 00:17:05.697 Got JSON-RPC error response 00:17:05.697 response: 00:17:05.697 { 00:17:05.697 "code": -126, 00:17:05.697 "message": "Required key not available" 00:17:05.697 } 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74746 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74746 ']' 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74746 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.697 08:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74746 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:05.697 killing process with pid 74746 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74746' 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74746 00:17:05.697 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.697 00:17:05.697 Latency(us) 00:17:05.697 [2024-11-17T08:17:10.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.697 [2024-11-17T08:17:10.709Z] =================================================================================================================== 00:17:05.697 [2024-11-17T08:17:10.709Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.697 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74746 00:17:07.074 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 74253 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74253 ']' 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74253 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74253 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:07.075 killing process with pid 74253 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74253' 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74253 00:17:07.075 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74253 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qQ8tnfd2jw 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qQ8tnfd2jw 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74803 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74803 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74803 ']' 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.453 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.453 [2024-11-17 08:17:13.341411] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:08.453 [2024-11-17 08:17:13.341568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.713 [2024-11-17 08:17:13.523782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.713 [2024-11-17 08:17:13.627604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.713 [2024-11-17 08:17:13.627663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
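Editor's note on the long key generated just above: the NVMeTLSkey-1:02:...: string is the TLS PSK interchange format, i.e. the configured key bytes with a CRC-32 appended, base64-encoded, and wrapped with a prefix plus a hash identifier derived from the digest argument (2 above). A sketch of the same transformation, mirroring the log's own "python -" heredoc; the little-endian byte order of the appended CRC is an assumption of this sketch, not something shown in the log:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
# Append a CRC-32 of the key bytes (assumed little-endian here), then base64 the concatenation.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF

The tests then write the resulting string to a mktemp file and chmod it 0600, since the keyring refuses key files readable by group or other (as the 0666 case later in this log demonstrates).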
00:17:08.713 [2024-11-17 08:17:13.627678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.713 [2024-11-17 08:17:13.627697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.713 [2024-11-17 08:17:13.627710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.713 [2024-11-17 08:17:13.628927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.972 [2024-11-17 08:17:13.826735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qQ8tnfd2jw 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qQ8tnfd2jw 00:17:09.231 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:09.490 [2024-11-17 08:17:14.402503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.490 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:09.750 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:10.009 [2024-11-17 08:17:14.814496] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.009 [2024-11-17 08:17:14.814774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:10.009 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:10.268 malloc0 00:17:10.268 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:10.528 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:10.528 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qQ8tnfd2jw 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
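Editor's note: the target-side TLS configuration for the run that follows was driven just above by setup_nvmf_tgt (target/tls.sh@52 through @59). Gathered in one place, the sequence amounts to the following RPCs against the target's default socket; a sketch assembled from the commands in this log:

# TCP transport, a subsystem, and a TLS-enabled listener (-k) on 10.0.0.3:4420.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k

# Back the subsystem with a malloc bdev as namespace 1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Register the interchange-format key and allow host1 to connect with it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0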
00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qQ8tnfd2jw 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74864 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74864 /var/tmp/bdevperf.sock 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74864 ']' 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.787 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.787 [2024-11-17 08:17:15.793893] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:10.787 [2024-11-17 08:17:15.794032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74864 ] 00:17:11.046 [2024-11-17 08:17:15.973978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.304 [2024-11-17 08:17:16.099436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.563 [2024-11-17 08:17:16.315720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:11.821 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.821 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:11.821 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:11.821 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:12.079 [2024-11-17 08:17:17.005116] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.079 TLSTESTn1 00:17:12.337 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:12.337 Running I/O for 10 seconds... 00:17:14.206 4592.00 IOPS, 17.94 MiB/s [2024-11-17T08:17:20.592Z] 4620.50 IOPS, 18.05 MiB/s [2024-11-17T08:17:21.527Z] 4598.33 IOPS, 17.96 MiB/s [2024-11-17T08:17:22.462Z] 4594.50 IOPS, 17.95 MiB/s [2024-11-17T08:17:23.398Z] 4589.60 IOPS, 17.93 MiB/s [2024-11-17T08:17:24.334Z] 4586.17 IOPS, 17.91 MiB/s [2024-11-17T08:17:25.269Z] 4585.86 IOPS, 17.91 MiB/s [2024-11-17T08:17:26.203Z] 4584.75 IOPS, 17.91 MiB/s [2024-11-17T08:17:27.580Z] 4582.89 IOPS, 17.90 MiB/s [2024-11-17T08:17:27.580Z] 4582.10 IOPS, 17.90 MiB/s 00:17:22.568 Latency(us) 00:17:22.568 [2024-11-17T08:17:27.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:22.568 Verification LBA range: start 0x0 length 0x2000 00:17:22.568 TLSTESTn1 : 10.02 4587.09 17.92 0.00 0.00 27859.86 6158.80 21476.86 00:17:22.568 [2024-11-17T08:17:27.580Z] =================================================================================================================== 00:17:22.568 [2024-11-17T08:17:27.581Z] Total : 4587.09 17.92 0.00 0.00 27859.86 6158.80 21476.86 00:17:22.569 { 00:17:22.569 "results": [ 00:17:22.569 { 00:17:22.569 "job": "TLSTESTn1", 00:17:22.569 "core_mask": "0x4", 00:17:22.569 "workload": "verify", 00:17:22.569 "status": "finished", 00:17:22.569 "verify_range": { 00:17:22.569 "start": 0, 00:17:22.569 "length": 8192 00:17:22.569 }, 00:17:22.569 "queue_depth": 128, 00:17:22.569 "io_size": 4096, 00:17:22.569 "runtime": 10.01638, 00:17:22.569 "iops": 4587.0863525545155, 00:17:22.569 "mibps": 17.918306064666076, 00:17:22.569 "io_failed": 0, 00:17:22.569 "io_timeout": 0, 00:17:22.569 "avg_latency_us": 27859.863968965143, 00:17:22.569 "min_latency_us": 6158.804819277108, 00:17:22.569 
"max_latency_us": 21476.857831325302 00:17:22.569 } 00:17:22.569 ], 00:17:22.569 "core_count": 1 00:17:22.569 } 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74864 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74864 ']' 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74864 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74864 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:22.569 killing process with pid 74864 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74864' 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74864 00:17:22.569 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.569 00:17:22.569 Latency(us) 00:17:22.569 [2024-11-17T08:17:27.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.569 [2024-11-17T08:17:27.581Z] =================================================================================================================== 00:17:22.569 [2024-11-17T08:17:27.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.569 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74864 00:17:23.504 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qQ8tnfd2jw 00:17:23.504 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qQ8tnfd2jw 00:17:23.504 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:23.504 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qQ8tnfd2jw 00:17:23.504 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qQ8tnfd2jw 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qQ8tnfd2jw 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75006 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75006 /var/tmp/bdevperf.sock 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75006 ']' 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.505 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.505 [2024-11-17 08:17:28.470817] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:23.505 [2024-11-17 08:17:28.470957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75006 ] 00:17:23.764 [2024-11-17 08:17:28.656531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.023 [2024-11-17 08:17:28.784855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.023 [2024-11-17 08:17:29.008196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.588 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.588 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:24.588 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:24.588 [2024-11-17 08:17:29.484241] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qQ8tnfd2jw': 0100666 00:17:24.588 [2024-11-17 08:17:29.484294] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:24.588 request: 00:17:24.588 { 00:17:24.588 "name": "key0", 00:17:24.588 "path": "/tmp/tmp.qQ8tnfd2jw", 00:17:24.588 "method": "keyring_file_add_key", 00:17:24.588 "req_id": 1 00:17:24.588 } 00:17:24.588 Got JSON-RPC error response 00:17:24.588 response: 00:17:24.588 { 00:17:24.588 "code": -1, 00:17:24.588 "message": "Operation not permitted" 00:17:24.588 } 00:17:24.588 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:24.847 [2024-11-17 08:17:29.703515] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.847 [2024-11-17 08:17:29.703586] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:24.847 request: 00:17:24.847 { 00:17:24.847 "name": "TLSTEST", 00:17:24.847 "trtype": "tcp", 00:17:24.847 "traddr": "10.0.0.3", 00:17:24.847 "adrfam": "ipv4", 00:17:24.847 "trsvcid": "4420", 00:17:24.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.847 "prchk_reftag": false, 00:17:24.847 "prchk_guard": false, 00:17:24.847 "hdgst": false, 00:17:24.847 "ddgst": false, 00:17:24.847 "psk": "key0", 00:17:24.847 "allow_unrecognized_csi": false, 00:17:24.847 "method": "bdev_nvme_attach_controller", 00:17:24.847 "req_id": 1 00:17:24.847 } 00:17:24.847 Got JSON-RPC error response 00:17:24.847 response: 00:17:24.847 { 00:17:24.847 "code": -126, 00:17:24.847 "message": "Required key not available" 00:17:24.847 } 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75006 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75006 ']' 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75006 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75006 00:17:24.847 killing process with pid 75006 00:17:24.847 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.847 00:17:24.847 Latency(us) 00:17:24.847 [2024-11-17T08:17:29.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.847 [2024-11-17T08:17:29.859Z] =================================================================================================================== 00:17:24.847 [2024-11-17T08:17:29.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75006' 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75006 00:17:24.847 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75006 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 74803 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74803 ']' 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74803 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.225 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74803 00:17:26.225 killing process with pid 74803 00:17:26.225 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:26.225 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:26.225 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74803' 00:17:26.225 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74803 00:17:26.225 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74803 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75064 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75064 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75064 ']' 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.606 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.606 [2024-11-17 08:17:32.327221] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:27.606 [2024-11-17 08:17:32.327350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.606 [2024-11-17 08:17:32.514903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.606 [2024-11-17 08:17:32.616367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.606 [2024-11-17 08:17:32.616421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.606 [2024-11-17 08:17:32.616437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.606 [2024-11-17 08:17:32.616467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.606 [2024-11-17 08:17:32.616480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
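The block above is the expected negative case in target/tls.sh: the PSK file /tmp/tmp.qQ8tnfd2jw is still mode 0100666, keyring_file_add_key rejects key files readable by group or other, so the RPC fails with -1 (Operation not permitted), and the follow-up bdev_nvme_attach_controller cannot load key0 and returns -126 (Required key not available); bdevperf (pid 75006) and the previous target (pid 74803) are then torn down and a fresh target (pid 75064) is started for the next check. A minimal sketch of the same failure, reusing only the paths and RPCs visible in this run (the chmod line is illustrative, not part of the captured output), would be:

chmod 0666 /tmp/tmp.qQ8tnfd2jw   # group/world-readable key files are rejected by the keyring
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw
# expected: "Operation not permitted" (code -1)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# expected: "Required key not available" (code -126)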
00:17:27.865 [2024-11-17 08:17:32.617492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.865 [2024-11-17 08:17:32.804154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.124 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.124 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:28.124 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.124 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.125 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qQ8tnfd2jw 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qQ8tnfd2jw 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.qQ8tnfd2jw 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qQ8tnfd2jw 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:28.384 [2024-11-17 08:17:33.375568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.384 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:28.643 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:28.903 [2024-11-17 08:17:33.794964] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:28.903 [2024-11-17 08:17:33.795248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.903 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:29.162 malloc0 00:17:29.162 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:29.421 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:29.681 
[2024-11-17 08:17:34.470944] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qQ8tnfd2jw': 0100666 00:17:29.681 [2024-11-17 08:17:34.470990] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:29.681 request: 00:17:29.681 { 00:17:29.681 "name": "key0", 00:17:29.681 "path": "/tmp/tmp.qQ8tnfd2jw", 00:17:29.681 "method": "keyring_file_add_key", 00:17:29.681 "req_id": 1 00:17:29.681 } 00:17:29.681 Got JSON-RPC error response 00:17:29.681 response: 00:17:29.681 { 00:17:29.681 "code": -1, 00:17:29.681 "message": "Operation not permitted" 00:17:29.681 } 00:17:29.681 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:29.681 [2024-11-17 08:17:34.682673] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:29.681 [2024-11-17 08:17:34.682727] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:29.681 request: 00:17:29.681 { 00:17:29.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.681 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.681 "psk": "key0", 00:17:29.681 "method": "nvmf_subsystem_add_host", 00:17:29.681 "req_id": 1 00:17:29.681 } 00:17:29.681 Got JSON-RPC error response 00:17:29.681 response: 00:17:29.681 { 00:17:29.681 "code": -32603, 00:17:29.681 "message": "Internal error" 00:17:29.681 } 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 75064 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75064 ']' 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75064 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75064 00:17:29.941 killing process with pid 75064 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75064' 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75064 00:17:29.941 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75064 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qQ8tnfd2jw 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75134 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75134 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75134 ']' 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.963 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.223 [2024-11-17 08:17:36.039967] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:31.223 [2024-11-17 08:17:36.040107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.223 [2024-11-17 08:17:36.226257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.483 [2024-11-17 08:17:36.331281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.483 [2024-11-17 08:17:36.331337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.483 [2024-11-17 08:17:36.331353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.483 [2024-11-17 08:17:36.331374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.483 [2024-11-17 08:17:36.331386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:31.483 [2024-11-17 08:17:36.332400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.741 [2024-11-17 08:17:36.519368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qQ8tnfd2jw 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qQ8tnfd2jw 00:17:32.000 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:32.259 [2024-11-17 08:17:37.082506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.259 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.518 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:32.518 [2024-11-17 08:17:37.502487] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.518 [2024-11-17 08:17:37.502897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.518 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.777 malloc0 00:17:32.777 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.037 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:33.296 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.555 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=75190 00:17:33.555 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:33.555 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:33.555 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 75190 /var/tmp/bdevperf.sock 00:17:33.556 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75190 ']' 
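With the key file restored to mode 0600 (target/tls.sh@182), the same provisioning now succeeds: the new target (pid 75134) creates the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TLS listener (-k) on 10.0.0.3:4420, a malloc0 namespace, adds key0 to the keyring, and authorizes host1 with --psk key0; bdevperf (pid 75190) is then started against it. Condensed to just the RPCs visible above (a sketch, not additional captured output):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw     # succeeds once the file is 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0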
00:17:33.556 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.556 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.556 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.556 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.556 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 [2024-11-17 08:17:38.489419] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:33.556 [2024-11-17 08:17:38.489609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75190 ] 00:17:33.814 [2024-11-17 08:17:38.677697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.814 [2024-11-17 08:17:38.806541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.073 [2024-11-17 08:17:39.039816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.332 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.332 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.332 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:34.590 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.849 [2024-11-17 08:17:39.705503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.849 TLSTESTn1 00:17:34.849 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:35.108 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:35.108 "subsystems": [ 00:17:35.108 { 00:17:35.108 "subsystem": "keyring", 00:17:35.108 "config": [ 00:17:35.108 { 00:17:35.108 "method": "keyring_file_add_key", 00:17:35.108 "params": { 00:17:35.108 "name": "key0", 00:17:35.108 "path": "/tmp/tmp.qQ8tnfd2jw" 00:17:35.108 } 00:17:35.108 } 00:17:35.108 ] 00:17:35.108 }, 00:17:35.108 { 00:17:35.108 "subsystem": "iobuf", 00:17:35.108 "config": [ 00:17:35.108 { 00:17:35.108 "method": "iobuf_set_options", 00:17:35.108 "params": { 00:17:35.108 "small_pool_count": 8192, 00:17:35.108 "large_pool_count": 1024, 00:17:35.108 "small_bufsize": 8192, 00:17:35.108 "large_bufsize": 135168, 00:17:35.108 "enable_numa": false 00:17:35.108 } 00:17:35.108 } 00:17:35.108 ] 00:17:35.108 }, 00:17:35.108 { 00:17:35.108 "subsystem": "sock", 00:17:35.108 "config": [ 00:17:35.108 { 00:17:35.108 "method": "sock_set_default_impl", 00:17:35.108 "params": { 
00:17:35.108 "impl_name": "uring" 00:17:35.108 } 00:17:35.108 }, 00:17:35.108 { 00:17:35.108 "method": "sock_impl_set_options", 00:17:35.108 "params": { 00:17:35.108 "impl_name": "ssl", 00:17:35.108 "recv_buf_size": 4096, 00:17:35.108 "send_buf_size": 4096, 00:17:35.108 "enable_recv_pipe": true, 00:17:35.108 "enable_quickack": false, 00:17:35.108 "enable_placement_id": 0, 00:17:35.108 "enable_zerocopy_send_server": true, 00:17:35.108 "enable_zerocopy_send_client": false, 00:17:35.108 "zerocopy_threshold": 0, 00:17:35.108 "tls_version": 0, 00:17:35.108 "enable_ktls": false 00:17:35.108 } 00:17:35.108 }, 00:17:35.108 { 00:17:35.108 "method": "sock_impl_set_options", 00:17:35.108 "params": { 00:17:35.108 "impl_name": "posix", 00:17:35.108 "recv_buf_size": 2097152, 00:17:35.108 "send_buf_size": 2097152, 00:17:35.108 "enable_recv_pipe": true, 00:17:35.108 "enable_quickack": false, 00:17:35.108 "enable_placement_id": 0, 00:17:35.108 "enable_zerocopy_send_server": true, 00:17:35.108 "enable_zerocopy_send_client": false, 00:17:35.108 "zerocopy_threshold": 0, 00:17:35.108 "tls_version": 0, 00:17:35.108 "enable_ktls": false 00:17:35.108 } 00:17:35.108 }, 00:17:35.108 { 00:17:35.108 "method": "sock_impl_set_options", 00:17:35.108 "params": { 00:17:35.108 "impl_name": "uring", 00:17:35.108 "recv_buf_size": 2097152, 00:17:35.108 "send_buf_size": 2097152, 00:17:35.108 "enable_recv_pipe": true, 00:17:35.108 "enable_quickack": false, 00:17:35.108 "enable_placement_id": 0, 00:17:35.108 "enable_zerocopy_send_server": false, 00:17:35.108 "enable_zerocopy_send_client": false, 00:17:35.108 "zerocopy_threshold": 0, 00:17:35.108 "tls_version": 0, 00:17:35.108 "enable_ktls": false 00:17:35.108 } 00:17:35.108 } 00:17:35.108 ] 00:17:35.108 }, 00:17:35.108 { 00:17:35.108 "subsystem": "vmd", 00:17:35.108 "config": [] 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "subsystem": "accel", 00:17:35.109 "config": [ 00:17:35.109 { 00:17:35.109 "method": "accel_set_options", 00:17:35.109 "params": { 00:17:35.109 "small_cache_size": 128, 00:17:35.109 "large_cache_size": 16, 00:17:35.109 "task_count": 2048, 00:17:35.109 "sequence_count": 2048, 00:17:35.109 "buf_count": 2048 00:17:35.109 } 00:17:35.109 } 00:17:35.109 ] 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "subsystem": "bdev", 00:17:35.109 "config": [ 00:17:35.109 { 00:17:35.109 "method": "bdev_set_options", 00:17:35.109 "params": { 00:17:35.109 "bdev_io_pool_size": 65535, 00:17:35.109 "bdev_io_cache_size": 256, 00:17:35.109 "bdev_auto_examine": true, 00:17:35.109 "iobuf_small_cache_size": 128, 00:17:35.109 "iobuf_large_cache_size": 16 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "bdev_raid_set_options", 00:17:35.109 "params": { 00:17:35.109 "process_window_size_kb": 1024, 00:17:35.109 "process_max_bandwidth_mb_sec": 0 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "bdev_iscsi_set_options", 00:17:35.109 "params": { 00:17:35.109 "timeout_sec": 30 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "bdev_nvme_set_options", 00:17:35.109 "params": { 00:17:35.109 "action_on_timeout": "none", 00:17:35.109 "timeout_us": 0, 00:17:35.109 "timeout_admin_us": 0, 00:17:35.109 "keep_alive_timeout_ms": 10000, 00:17:35.109 "arbitration_burst": 0, 00:17:35.109 "low_priority_weight": 0, 00:17:35.109 "medium_priority_weight": 0, 00:17:35.109 "high_priority_weight": 0, 00:17:35.109 "nvme_adminq_poll_period_us": 10000, 00:17:35.109 "nvme_ioq_poll_period_us": 0, 00:17:35.109 "io_queue_requests": 0, 00:17:35.109 "delay_cmd_submit": 
true, 00:17:35.109 "transport_retry_count": 4, 00:17:35.109 "bdev_retry_count": 3, 00:17:35.109 "transport_ack_timeout": 0, 00:17:35.109 "ctrlr_loss_timeout_sec": 0, 00:17:35.109 "reconnect_delay_sec": 0, 00:17:35.109 "fast_io_fail_timeout_sec": 0, 00:17:35.109 "disable_auto_failback": false, 00:17:35.109 "generate_uuids": false, 00:17:35.109 "transport_tos": 0, 00:17:35.109 "nvme_error_stat": false, 00:17:35.109 "rdma_srq_size": 0, 00:17:35.109 "io_path_stat": false, 00:17:35.109 "allow_accel_sequence": false, 00:17:35.109 "rdma_max_cq_size": 0, 00:17:35.109 "rdma_cm_event_timeout_ms": 0, 00:17:35.109 "dhchap_digests": [ 00:17:35.109 "sha256", 00:17:35.109 "sha384", 00:17:35.109 "sha512" 00:17:35.109 ], 00:17:35.109 "dhchap_dhgroups": [ 00:17:35.109 "null", 00:17:35.109 "ffdhe2048", 00:17:35.109 "ffdhe3072", 00:17:35.109 "ffdhe4096", 00:17:35.109 "ffdhe6144", 00:17:35.109 "ffdhe8192" 00:17:35.109 ] 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "bdev_nvme_set_hotplug", 00:17:35.109 "params": { 00:17:35.109 "period_us": 100000, 00:17:35.109 "enable": false 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "bdev_malloc_create", 00:17:35.109 "params": { 00:17:35.109 "name": "malloc0", 00:17:35.109 "num_blocks": 8192, 00:17:35.109 "block_size": 4096, 00:17:35.109 "physical_block_size": 4096, 00:17:35.109 "uuid": "b1f70869-3683-4952-aa97-57ac5d91c793", 00:17:35.109 "optimal_io_boundary": 0, 00:17:35.109 "md_size": 0, 00:17:35.109 "dif_type": 0, 00:17:35.109 "dif_is_head_of_md": false, 00:17:35.109 "dif_pi_format": 0 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "bdev_wait_for_examine" 00:17:35.109 } 00:17:35.109 ] 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "subsystem": "nbd", 00:17:35.109 "config": [] 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "subsystem": "scheduler", 00:17:35.109 "config": [ 00:17:35.109 { 00:17:35.109 "method": "framework_set_scheduler", 00:17:35.109 "params": { 00:17:35.109 "name": "static" 00:17:35.109 } 00:17:35.109 } 00:17:35.109 ] 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "subsystem": "nvmf", 00:17:35.109 "config": [ 00:17:35.109 { 00:17:35.109 "method": "nvmf_set_config", 00:17:35.109 "params": { 00:17:35.109 "discovery_filter": "match_any", 00:17:35.109 "admin_cmd_passthru": { 00:17:35.109 "identify_ctrlr": false 00:17:35.109 }, 00:17:35.109 "dhchap_digests": [ 00:17:35.109 "sha256", 00:17:35.109 "sha384", 00:17:35.109 "sha512" 00:17:35.109 ], 00:17:35.109 "dhchap_dhgroups": [ 00:17:35.109 "null", 00:17:35.109 "ffdhe2048", 00:17:35.109 "ffdhe3072", 00:17:35.109 "ffdhe4096", 00:17:35.109 "ffdhe6144", 00:17:35.109 "ffdhe8192" 00:17:35.109 ] 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_set_max_subsystems", 00:17:35.109 "params": { 00:17:35.109 "max_subsystems": 1024 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_set_crdt", 00:17:35.109 "params": { 00:17:35.109 "crdt1": 0, 00:17:35.109 "crdt2": 0, 00:17:35.109 "crdt3": 0 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_create_transport", 00:17:35.109 "params": { 00:17:35.109 "trtype": "TCP", 00:17:35.109 "max_queue_depth": 128, 00:17:35.109 "max_io_qpairs_per_ctrlr": 127, 00:17:35.109 "in_capsule_data_size": 4096, 00:17:35.109 "max_io_size": 131072, 00:17:35.109 "io_unit_size": 131072, 00:17:35.109 "max_aq_depth": 128, 00:17:35.109 "num_shared_buffers": 511, 00:17:35.109 "buf_cache_size": 4294967295, 00:17:35.109 "dif_insert_or_strip": false, 00:17:35.109 "zcopy": false, 
00:17:35.109 "c2h_success": false, 00:17:35.109 "sock_priority": 0, 00:17:35.109 "abort_timeout_sec": 1, 00:17:35.109 "ack_timeout": 0, 00:17:35.109 "data_wr_pool_size": 0 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_create_subsystem", 00:17:35.109 "params": { 00:17:35.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.109 "allow_any_host": false, 00:17:35.109 "serial_number": "SPDK00000000000001", 00:17:35.109 "model_number": "SPDK bdev Controller", 00:17:35.109 "max_namespaces": 10, 00:17:35.109 "min_cntlid": 1, 00:17:35.109 "max_cntlid": 65519, 00:17:35.109 "ana_reporting": false 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_subsystem_add_host", 00:17:35.109 "params": { 00:17:35.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.109 "host": "nqn.2016-06.io.spdk:host1", 00:17:35.109 "psk": "key0" 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_subsystem_add_ns", 00:17:35.109 "params": { 00:17:35.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.109 "namespace": { 00:17:35.109 "nsid": 1, 00:17:35.109 "bdev_name": "malloc0", 00:17:35.109 "nguid": "B1F7086936834952AA9757AC5D91C793", 00:17:35.109 "uuid": "b1f70869-3683-4952-aa97-57ac5d91c793", 00:17:35.109 "no_auto_visible": false 00:17:35.109 } 00:17:35.109 } 00:17:35.109 }, 00:17:35.109 { 00:17:35.109 "method": "nvmf_subsystem_add_listener", 00:17:35.109 "params": { 00:17:35.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.109 "listen_address": { 00:17:35.109 "trtype": "TCP", 00:17:35.109 "adrfam": "IPv4", 00:17:35.109 "traddr": "10.0.0.3", 00:17:35.109 "trsvcid": "4420" 00:17:35.109 }, 00:17:35.109 "secure_channel": true 00:17:35.109 } 00:17:35.109 } 00:17:35.109 ] 00:17:35.109 } 00:17:35.109 ] 00:17:35.110 }' 00:17:35.110 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:35.368 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:35.368 "subsystems": [ 00:17:35.368 { 00:17:35.368 "subsystem": "keyring", 00:17:35.368 "config": [ 00:17:35.369 { 00:17:35.369 "method": "keyring_file_add_key", 00:17:35.369 "params": { 00:17:35.369 "name": "key0", 00:17:35.369 "path": "/tmp/tmp.qQ8tnfd2jw" 00:17:35.369 } 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "subsystem": "iobuf", 00:17:35.369 "config": [ 00:17:35.369 { 00:17:35.369 "method": "iobuf_set_options", 00:17:35.369 "params": { 00:17:35.369 "small_pool_count": 8192, 00:17:35.369 "large_pool_count": 1024, 00:17:35.369 "small_bufsize": 8192, 00:17:35.369 "large_bufsize": 135168, 00:17:35.369 "enable_numa": false 00:17:35.369 } 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "subsystem": "sock", 00:17:35.369 "config": [ 00:17:35.369 { 00:17:35.369 "method": "sock_set_default_impl", 00:17:35.369 "params": { 00:17:35.369 "impl_name": "uring" 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "sock_impl_set_options", 00:17:35.369 "params": { 00:17:35.369 "impl_name": "ssl", 00:17:35.369 "recv_buf_size": 4096, 00:17:35.369 "send_buf_size": 4096, 00:17:35.369 "enable_recv_pipe": true, 00:17:35.369 "enable_quickack": false, 00:17:35.369 "enable_placement_id": 0, 00:17:35.369 "enable_zerocopy_send_server": true, 00:17:35.369 "enable_zerocopy_send_client": false, 00:17:35.369 "zerocopy_threshold": 0, 00:17:35.369 "tls_version": 0, 00:17:35.369 "enable_ktls": false 00:17:35.369 } 00:17:35.369 }, 
00:17:35.369 { 00:17:35.369 "method": "sock_impl_set_options", 00:17:35.369 "params": { 00:17:35.369 "impl_name": "posix", 00:17:35.369 "recv_buf_size": 2097152, 00:17:35.369 "send_buf_size": 2097152, 00:17:35.369 "enable_recv_pipe": true, 00:17:35.369 "enable_quickack": false, 00:17:35.369 "enable_placement_id": 0, 00:17:35.369 "enable_zerocopy_send_server": true, 00:17:35.369 "enable_zerocopy_send_client": false, 00:17:35.369 "zerocopy_threshold": 0, 00:17:35.369 "tls_version": 0, 00:17:35.369 "enable_ktls": false 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "sock_impl_set_options", 00:17:35.369 "params": { 00:17:35.369 "impl_name": "uring", 00:17:35.369 "recv_buf_size": 2097152, 00:17:35.369 "send_buf_size": 2097152, 00:17:35.369 "enable_recv_pipe": true, 00:17:35.369 "enable_quickack": false, 00:17:35.369 "enable_placement_id": 0, 00:17:35.369 "enable_zerocopy_send_server": false, 00:17:35.369 "enable_zerocopy_send_client": false, 00:17:35.369 "zerocopy_threshold": 0, 00:17:35.369 "tls_version": 0, 00:17:35.369 "enable_ktls": false 00:17:35.369 } 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "subsystem": "vmd", 00:17:35.369 "config": [] 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "subsystem": "accel", 00:17:35.369 "config": [ 00:17:35.369 { 00:17:35.369 "method": "accel_set_options", 00:17:35.369 "params": { 00:17:35.369 "small_cache_size": 128, 00:17:35.369 "large_cache_size": 16, 00:17:35.369 "task_count": 2048, 00:17:35.369 "sequence_count": 2048, 00:17:35.369 "buf_count": 2048 00:17:35.369 } 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "subsystem": "bdev", 00:17:35.369 "config": [ 00:17:35.369 { 00:17:35.369 "method": "bdev_set_options", 00:17:35.369 "params": { 00:17:35.369 "bdev_io_pool_size": 65535, 00:17:35.369 "bdev_io_cache_size": 256, 00:17:35.369 "bdev_auto_examine": true, 00:17:35.369 "iobuf_small_cache_size": 128, 00:17:35.369 "iobuf_large_cache_size": 16 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "bdev_raid_set_options", 00:17:35.369 "params": { 00:17:35.369 "process_window_size_kb": 1024, 00:17:35.369 "process_max_bandwidth_mb_sec": 0 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "bdev_iscsi_set_options", 00:17:35.369 "params": { 00:17:35.369 "timeout_sec": 30 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "bdev_nvme_set_options", 00:17:35.369 "params": { 00:17:35.369 "action_on_timeout": "none", 00:17:35.369 "timeout_us": 0, 00:17:35.369 "timeout_admin_us": 0, 00:17:35.369 "keep_alive_timeout_ms": 10000, 00:17:35.369 "arbitration_burst": 0, 00:17:35.369 "low_priority_weight": 0, 00:17:35.369 "medium_priority_weight": 0, 00:17:35.369 "high_priority_weight": 0, 00:17:35.369 "nvme_adminq_poll_period_us": 10000, 00:17:35.369 "nvme_ioq_poll_period_us": 0, 00:17:35.369 "io_queue_requests": 512, 00:17:35.369 "delay_cmd_submit": true, 00:17:35.369 "transport_retry_count": 4, 00:17:35.369 "bdev_retry_count": 3, 00:17:35.369 "transport_ack_timeout": 0, 00:17:35.369 "ctrlr_loss_timeout_sec": 0, 00:17:35.369 "reconnect_delay_sec": 0, 00:17:35.369 "fast_io_fail_timeout_sec": 0, 00:17:35.369 "disable_auto_failback": false, 00:17:35.369 "generate_uuids": false, 00:17:35.369 "transport_tos": 0, 00:17:35.369 "nvme_error_stat": false, 00:17:35.369 "rdma_srq_size": 0, 00:17:35.369 "io_path_stat": false, 00:17:35.369 "allow_accel_sequence": false, 00:17:35.369 "rdma_max_cq_size": 0, 00:17:35.369 "rdma_cm_event_timeout_ms": 0, 00:17:35.369 
"dhchap_digests": [ 00:17:35.369 "sha256", 00:17:35.369 "sha384", 00:17:35.369 "sha512" 00:17:35.369 ], 00:17:35.369 "dhchap_dhgroups": [ 00:17:35.369 "null", 00:17:35.369 "ffdhe2048", 00:17:35.369 "ffdhe3072", 00:17:35.369 "ffdhe4096", 00:17:35.369 "ffdhe6144", 00:17:35.369 "ffdhe8192" 00:17:35.369 ] 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "bdev_nvme_attach_controller", 00:17:35.369 "params": { 00:17:35.369 "name": "TLSTEST", 00:17:35.369 "trtype": "TCP", 00:17:35.369 "adrfam": "IPv4", 00:17:35.369 "traddr": "10.0.0.3", 00:17:35.369 "trsvcid": "4420", 00:17:35.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.369 "prchk_reftag": false, 00:17:35.369 "prchk_guard": false, 00:17:35.369 "ctrlr_loss_timeout_sec": 0, 00:17:35.369 "reconnect_delay_sec": 0, 00:17:35.369 "fast_io_fail_timeout_sec": 0, 00:17:35.369 "psk": "key0", 00:17:35.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.369 "hdgst": false, 00:17:35.369 "ddgst": false, 00:17:35.369 "multipath": "multipath" 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "bdev_nvme_set_hotplug", 00:17:35.369 "params": { 00:17:35.369 "period_us": 100000, 00:17:35.369 "enable": false 00:17:35.369 } 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "method": "bdev_wait_for_examine" 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }, 00:17:35.369 { 00:17:35.369 "subsystem": "nbd", 00:17:35.369 "config": [] 00:17:35.369 } 00:17:35.369 ] 00:17:35.369 }' 00:17:35.369 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 75190 00:17:35.369 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75190 ']' 00:17:35.369 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75190 00:17:35.369 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:35.369 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.369 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75190 00:17:35.628 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:35.628 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:35.628 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75190' 00:17:35.628 killing process with pid 75190 00:17:35.628 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.628 00:17:35.628 Latency(us) 00:17:35.628 [2024-11-17T08:17:40.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.628 [2024-11-17T08:17:40.640Z] =================================================================================================================== 00:17:35.628 [2024-11-17T08:17:40.640Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.628 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75190 00:17:35.628 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75190 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 75134 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75134 ']' 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 75134 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75134 00:17:36.565 killing process with pid 75134 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75134' 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75134 00:17:36.565 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75134 00:17:37.946 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:37.946 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.946 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.946 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:37.946 "subsystems": [ 00:17:37.946 { 00:17:37.946 "subsystem": "keyring", 00:17:37.946 "config": [ 00:17:37.946 { 00:17:37.946 "method": "keyring_file_add_key", 00:17:37.946 "params": { 00:17:37.946 "name": "key0", 00:17:37.946 "path": "/tmp/tmp.qQ8tnfd2jw" 00:17:37.946 } 00:17:37.946 } 00:17:37.946 ] 00:17:37.946 }, 00:17:37.946 { 00:17:37.946 "subsystem": "iobuf", 00:17:37.946 "config": [ 00:17:37.946 { 00:17:37.946 "method": "iobuf_set_options", 00:17:37.946 "params": { 00:17:37.946 "small_pool_count": 8192, 00:17:37.946 "large_pool_count": 1024, 00:17:37.946 "small_bufsize": 8192, 00:17:37.946 "large_bufsize": 135168, 00:17:37.946 "enable_numa": false 00:17:37.946 } 00:17:37.946 } 00:17:37.946 ] 00:17:37.946 }, 00:17:37.946 { 00:17:37.946 "subsystem": "sock", 00:17:37.946 "config": [ 00:17:37.946 { 00:17:37.946 "method": "sock_set_default_impl", 00:17:37.946 "params": { 00:17:37.946 "impl_name": "uring" 00:17:37.946 } 00:17:37.946 }, 00:17:37.946 { 00:17:37.946 "method": "sock_impl_set_options", 00:17:37.946 "params": { 00:17:37.946 "impl_name": "ssl", 00:17:37.946 "recv_buf_size": 4096, 00:17:37.946 "send_buf_size": 4096, 00:17:37.946 "enable_recv_pipe": true, 00:17:37.946 "enable_quickack": false, 00:17:37.946 "enable_placement_id": 0, 00:17:37.946 "enable_zerocopy_send_server": true, 00:17:37.946 "enable_zerocopy_send_client": false, 00:17:37.946 "zerocopy_threshold": 0, 00:17:37.946 "tls_version": 0, 00:17:37.946 "enable_ktls": false 00:17:37.946 } 00:17:37.946 }, 00:17:37.946 { 00:17:37.946 "method": "sock_impl_set_options", 00:17:37.946 "params": { 00:17:37.946 "impl_name": "posix", 00:17:37.946 "recv_buf_size": 2097152, 00:17:37.946 "send_buf_size": 2097152, 00:17:37.947 "enable_recv_pipe": true, 00:17:37.947 "enable_quickack": false, 00:17:37.947 "enable_placement_id": 0, 00:17:37.947 "enable_zerocopy_send_server": true, 00:17:37.947 "enable_zerocopy_send_client": false, 00:17:37.947 "zerocopy_threshold": 0, 00:17:37.947 "tls_version": 0, 00:17:37.947 "enable_ktls": false 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "sock_impl_set_options", 
00:17:37.947 "params": { 00:17:37.947 "impl_name": "uring", 00:17:37.947 "recv_buf_size": 2097152, 00:17:37.947 "send_buf_size": 2097152, 00:17:37.947 "enable_recv_pipe": true, 00:17:37.947 "enable_quickack": false, 00:17:37.947 "enable_placement_id": 0, 00:17:37.947 "enable_zerocopy_send_server": false, 00:17:37.947 "enable_zerocopy_send_client": false, 00:17:37.947 "zerocopy_threshold": 0, 00:17:37.947 "tls_version": 0, 00:17:37.947 "enable_ktls": false 00:17:37.947 } 00:17:37.947 } 00:17:37.947 ] 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "subsystem": "vmd", 00:17:37.947 "config": [] 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "subsystem": "accel", 00:17:37.947 "config": [ 00:17:37.947 { 00:17:37.947 "method": "accel_set_options", 00:17:37.947 "params": { 00:17:37.947 "small_cache_size": 128, 00:17:37.947 "large_cache_size": 16, 00:17:37.947 "task_count": 2048, 00:17:37.947 "sequence_count": 2048, 00:17:37.947 "buf_count": 2048 00:17:37.947 } 00:17:37.947 } 00:17:37.947 ] 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "subsystem": "bdev", 00:17:37.947 "config": [ 00:17:37.947 { 00:17:37.947 "method": "bdev_set_options", 00:17:37.947 "params": { 00:17:37.947 "bdev_io_pool_size": 65535, 00:17:37.947 "bdev_io_cache_size": 256, 00:17:37.947 "bdev_auto_examine": true, 00:17:37.947 "iobuf_small_cache_size": 128, 00:17:37.947 "iobuf_large_cache_size": 16 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "bdev_raid_set_options", 00:17:37.947 "params": { 00:17:37.947 "process_window_size_kb": 1024, 00:17:37.947 "process_max_bandwidth_mb_sec": 0 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "bdev_iscsi_set_options", 00:17:37.947 "params": { 00:17:37.947 "timeout_sec": 30 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "bdev_nvme_set_options", 00:17:37.947 "params": { 00:17:37.947 "action_on_timeout": "none", 00:17:37.947 "timeout_us": 0, 00:17:37.947 "timeout_admin_us": 0, 00:17:37.947 "keep_alive_timeout_ms": 10000, 00:17:37.947 "arbitration_burst": 0, 00:17:37.947 "low_priority_weight": 0, 00:17:37.947 "medium_priority_weight": 0, 00:17:37.947 "high_priority_weight": 0, 00:17:37.947 "nvme_adminq_poll_period_us": 10000, 00:17:37.947 "nvme_ioq_poll_period_us": 0, 00:17:37.947 "io_queue_requests": 0, 00:17:37.947 "delay_cmd_submit": true, 00:17:37.947 "transport_retry_count": 4, 00:17:37.947 "bdev_retry_count": 3, 00:17:37.947 "transport_ack_timeout": 0, 00:17:37.947 "ctrlr_loss_timeout_sec": 0, 00:17:37.947 "reconnect_delay_sec": 0, 00:17:37.947 "fast_io_fail_timeout_sec": 0, 00:17:37.947 "disable_auto_failback": false, 00:17:37.947 "generate_uuids": false, 00:17:37.947 "transport_tos": 0, 00:17:37.947 "nvme_error_stat": false, 00:17:37.947 "rdma_srq_size": 0, 00:17:37.947 "io_path_stat": false, 00:17:37.947 "allow_accel_sequence": false, 00:17:37.947 "rdma_max_cq_size": 0, 00:17:37.947 "rdma_cm_event_timeout_ms": 0, 00:17:37.947 "dhchap_digests": [ 00:17:37.947 "sha256", 00:17:37.947 "sha384", 00:17:37.947 "sha512" 00:17:37.947 ], 00:17:37.947 "dhchap_dhgroups": [ 00:17:37.947 "null", 00:17:37.947 "ffdhe2048", 00:17:37.947 "ffdhe3072", 00:17:37.947 "ffdhe4096", 00:17:37.947 "ffdhe6144", 00:17:37.947 "ffdhe8192" 00:17:37.947 ] 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "bdev_nvme_set_hotplug", 00:17:37.947 "params": { 00:17:37.947 "period_us": 100000, 00:17:37.947 "enable": false 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "bdev_malloc_create", 00:17:37.947 "params": { 00:17:37.947 
"name": "malloc0", 00:17:37.947 "num_blocks": 8192, 00:17:37.947 "block_size": 4096, 00:17:37.947 "physical_block_size": 4096, 00:17:37.947 "uuid": "b1f70869-3683-4952-aa97-57ac5d91c793", 00:17:37.947 "optimal_io_boundary": 0, 00:17:37.947 "md_size": 0, 00:17:37.947 "dif_type": 0, 00:17:37.947 "dif_is_head_of_md": false, 00:17:37.947 "dif_pi_format": 0 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "bdev_wait_for_examine" 00:17:37.947 } 00:17:37.947 ] 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "subsystem": "nbd", 00:17:37.947 "config": [] 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "subsystem": "scheduler", 00:17:37.947 "config": [ 00:17:37.947 { 00:17:37.947 "method": "framework_set_scheduler", 00:17:37.947 "params": { 00:17:37.947 "name": "static" 00:17:37.947 } 00:17:37.947 } 00:17:37.947 ] 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "subsystem": "nvmf", 00:17:37.947 "config": [ 00:17:37.947 { 00:17:37.947 "method": "nvmf_set_config", 00:17:37.947 "params": { 00:17:37.947 "discovery_filter": "match_any", 00:17:37.947 "admin_cmd_passthru": { 00:17:37.947 "identify_ctrlr": false 00:17:37.947 }, 00:17:37.947 "dhchap_digests": [ 00:17:37.947 "sha256", 00:17:37.947 "sha384", 00:17:37.947 "sha512" 00:17:37.947 ], 00:17:37.947 "dhchap_dhgroups": [ 00:17:37.947 "null", 00:17:37.947 "ffdhe2048", 00:17:37.947 "ffdhe3072", 00:17:37.947 "ffdhe4096", 00:17:37.947 "ffdhe6144", 00:17:37.947 "ffdhe8192" 00:17:37.947 ] 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "nvmf_set_max_subsystems", 00:17:37.947 "params": { 00:17:37.947 "max_subsystems": 1024 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "nvmf_set_crdt", 00:17:37.947 "params": { 00:17:37.947 "crdt1": 0, 00:17:37.947 "crdt2": 0, 00:17:37.947 "crdt3": 0 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "nvmf_create_transport", 00:17:37.947 "params": { 00:17:37.947 "trtype": "TCP", 00:17:37.947 "max_queue_depth": 128, 00:17:37.947 "max_io_qpairs_per_ctrlr": 127, 00:17:37.947 "in_capsule_data_size": 4096, 00:17:37.947 "max_io_size": 131072, 00:17:37.947 "io_unit_size": 131072, 00:17:37.947 "max_aq_depth": 128, 00:17:37.947 "num_shared_buffers": 511, 00:17:37.947 "buf_cache_size": 4294967295, 00:17:37.947 "dif_insert_or_strip": false, 00:17:37.947 "zcopy": false, 00:17:37.947 "c2h_success": false, 00:17:37.947 "sock_priority": 0, 00:17:37.947 "abort_timeout_sec": 1, 00:17:37.947 "ack_timeout": 0, 00:17:37.947 "data_wr_pool_size": 0 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "nvmf_create_subsystem", 00:17:37.947 "params": { 00:17:37.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.947 "allow_any_host": false, 00:17:37.947 "serial_number": "SPDK00000000000001", 00:17:37.947 "model_number": "SPDK bdev Controller", 00:17:37.947 "max_namespaces": 10, 00:17:37.947 "min_cntlid": 1, 00:17:37.947 "max_cntlid": 65519, 00:17:37.947 "ana_reporting": false 00:17:37.947 } 00:17:37.947 }, 00:17:37.947 { 00:17:37.947 "method": "nvmf_subsystem_add_host", 00:17:37.947 "params": { 00:17:37.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.947 "host": "nqn.2016-06.io.spdk:host1", 00:17:37.948 "psk": "key0" 00:17:37.948 } 00:17:37.948 }, 00:17:37.948 { 00:17:37.948 "method": "nvmf_subsystem_add_ns", 00:17:37.948 "params": { 00:17:37.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.948 "namespace": { 00:17:37.948 "nsid": 1, 00:17:37.948 "bdev_name": "malloc0", 00:17:37.948 "nguid": "B1F7086936834952AA9757AC5D91C793", 00:17:37.948 "uuid": 
"b1f70869-3683-4952-aa97-57ac5d91c793", 00:17:37.948 "no_auto_visible": false 00:17:37.948 } 00:17:37.948 } 00:17:37.948 }, 00:17:37.948 { 00:17:37.948 "method": "nvmf_subsystem_add_listener", 00:17:37.948 "params": { 00:17:37.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.948 "listen_address": { 00:17:37.948 "trtype": "TCP", 00:17:37.948 "adrfam": "IPv4", 00:17:37.948 "traddr": "10.0.0.3", 00:17:37.948 "trsvcid": "4420" 00:17:37.948 }, 00:17:37.948 "secure_channel": true 00:17:37.948 } 00:17:37.948 } 00:17:37.948 ] 00:17:37.948 } 00:17:37.948 ] 00:17:37.948 }' 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75258 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75258 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75258 ']' 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.948 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.948 [2024-11-17 08:17:42.821994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:37.948 [2024-11-17 08:17:42.822366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.207 [2024-11-17 08:17:43.008609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.207 [2024-11-17 08:17:43.115813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.207 [2024-11-17 08:17:43.115865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.207 [2024-11-17 08:17:43.115881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.207 [2024-11-17 08:17:43.115901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.207 [2024-11-17 08:17:43.115913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:38.207 [2024-11-17 08:17:43.117013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.467 [2024-11-17 08:17:43.404565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.727 [2024-11-17 08:17:43.584750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.728 [2024-11-17 08:17:43.616653] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.728 [2024-11-17 08:17:43.616905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=75290 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 75290 /var/tmp/bdevperf.sock 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75290 ']' 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:38.728 "subsystems": [ 00:17:38.728 { 00:17:38.728 "subsystem": "keyring", 00:17:38.728 "config": [ 00:17:38.728 { 00:17:38.728 "method": "keyring_file_add_key", 00:17:38.728 "params": { 00:17:38.728 "name": "key0", 00:17:38.728 "path": "/tmp/tmp.qQ8tnfd2jw" 00:17:38.728 } 00:17:38.728 } 00:17:38.728 ] 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "subsystem": "iobuf", 00:17:38.728 "config": [ 00:17:38.728 { 00:17:38.728 "method": "iobuf_set_options", 00:17:38.728 "params": { 00:17:38.728 "small_pool_count": 8192, 00:17:38.728 "large_pool_count": 1024, 00:17:38.728 "small_bufsize": 8192, 00:17:38.728 "large_bufsize": 135168, 00:17:38.728 "enable_numa": false 00:17:38.728 } 00:17:38.728 } 00:17:38.728 ] 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "subsystem": "sock", 00:17:38.728 "config": [ 00:17:38.728 { 00:17:38.728 "method": "sock_set_default_impl", 00:17:38.728 "params": { 00:17:38.728 "impl_name": "uring" 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "sock_impl_set_options", 00:17:38.728 "params": { 00:17:38.728 "impl_name": "ssl", 00:17:38.728 "recv_buf_size": 4096, 00:17:38.728 "send_buf_size": 4096, 00:17:38.728 "enable_recv_pipe": true, 00:17:38.728 "enable_quickack": false, 00:17:38.728 "enable_placement_id": 0, 00:17:38.728 "enable_zerocopy_send_server": true, 00:17:38.728 
"enable_zerocopy_send_client": false, 00:17:38.728 "zerocopy_threshold": 0, 00:17:38.728 "tls_version": 0, 00:17:38.728 "enable_ktls": false 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "sock_impl_set_options", 00:17:38.728 "params": { 00:17:38.728 "impl_name": "posix", 00:17:38.728 "recv_buf_size": 2097152, 00:17:38.728 "send_buf_size": 2097152, 00:17:38.728 "enable_recv_pipe": true, 00:17:38.728 "enable_quickack": false, 00:17:38.728 "enable_placement_id": 0, 00:17:38.728 "enable_zerocopy_send_server": true, 00:17:38.728 "enable_zerocopy_send_client": false, 00:17:38.728 "zerocopy_threshold": 0, 00:17:38.728 "tls_version": 0, 00:17:38.728 "enable_ktls": false 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "sock_impl_set_options", 00:17:38.728 "params": { 00:17:38.728 "impl_name": "uring", 00:17:38.728 "recv_buf_size": 2097152, 00:17:38.728 "send_buf_size": 2097152, 00:17:38.728 "enable_recv_pipe": true, 00:17:38.728 "enable_quickack": false, 00:17:38.728 "enable_placement_id": 0, 00:17:38.728 "enable_zerocopy_send_server": false, 00:17:38.728 "enable_zerocopy_send_client": false, 00:17:38.728 "zerocopy_threshold": 0, 00:17:38.728 "tls_version": 0, 00:17:38.728 "enable_ktls": false 00:17:38.728 } 00:17:38.728 } 00:17:38.728 ] 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "subsystem": "vmd", 00:17:38.728 "config": [] 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "subsystem": "accel", 00:17:38.728 "config": [ 00:17:38.728 { 00:17:38.728 "method": "accel_set_options", 00:17:38.728 "params": { 00:17:38.728 "small_cache_size": 128, 00:17:38.728 "large_cache_size": 16, 00:17:38.728 "task_count": 2048, 00:17:38.728 "sequence_count": 2048, 00:17:38.728 "buf_count": 2048 00:17:38.728 } 00:17:38.728 } 00:17:38.728 ] 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "subsystem": "bdev", 00:17:38.728 "config": [ 00:17:38.728 { 00:17:38.728 "method": "bdev_set_options", 00:17:38.728 "params": { 00:17:38.728 "bdev_io_pool_size": 65535, 00:17:38.728 "bdev_io_cache_size": 256, 00:17:38.728 "bdev_auto_examine": true, 00:17:38.728 "iobuf_small_cache_size": 128, 00:17:38.728 "iobuf_large_cache_size": 16 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "bdev_raid_set_options", 00:17:38.728 "params": { 00:17:38.728 "process_window_size_kb": 1024, 00:17:38.728 "process_max_bandwidth_mb_sec": 0 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "bdev_iscsi_set_options", 00:17:38.728 "params": { 00:17:38.728 "timeout_sec": 30 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "bdev_nvme_set_options", 00:17:38.728 "params": { 00:17:38.728 "action_on_timeout": "none", 00:17:38.728 "timeout_us": 0, 00:17:38.728 "timeout_admin_us": 0, 00:17:38.728 "keep_alive_timeout_ms": 10000, 00:17:38.728 "arbitration_burst": 0, 00:17:38.728 "low_priority_weight": 0, 00:17:38.728 "medium_priority_weight": 0, 00:17:38.728 "high_priority_weight": 0, 00:17:38.728 "nvme_adminq_poll_period_us": 10000, 00:17:38.728 "nvme_ioq_poll_period_us": 0, 00:17:38.728 "io_queue_requests": 512, 00:17:38.728 "delay_cmd_submit": true, 00:17:38.728 "transport_retry_count": 4, 00:17:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.728 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.728 "bdev_retry_count": 3, 00:17:38.728 "transport_ack_timeout": 0, 00:17:38.728 "ctrlr_loss_timeout_sec": 0, 00:17:38.728 "reconnect_delay_sec": 0, 00:17:38.728 "fast_io_fail_timeout_sec": 0, 00:17:38.728 "disable_auto_failback": false, 00:17:38.728 "generate_uuids": false, 00:17:38.728 "transport_tos": 0, 00:17:38.728 "nvme_error_stat": false, 00:17:38.728 "rdma_srq_size": 0, 00:17:38.728 "io_path_stat": false, 00:17:38.728 "allow_accel_sequence": false, 00:17:38.728 "rdma_max_cq_size": 0, 00:17:38.728 "rdma_cm_event_timeout_ms": 0, 00:17:38.728 "dhchap_digests": [ 00:17:38.728 "sha256", 00:17:38.728 "sha384", 00:17:38.728 "sha512" 00:17:38.728 ], 00:17:38.728 "dhchap_dhgroups": [ 00:17:38.728 "null", 00:17:38.728 "ffdhe2048", 00:17:38.728 "ffdhe3072", 00:17:38.728 "ffdhe4096", 00:17:38.728 "ffdhe6144", 00:17:38.728 "ffdhe8192" 00:17:38.728 ] 00:17:38.728 } 00:17:38.728 }, 00:17:38.728 { 00:17:38.728 "method": "bdev_nvme_attach_controller", 00:17:38.728 "params": { 00:17:38.728 "name": "TLSTEST", 00:17:38.728 "trtype": "TCP", 00:17:38.728 "adrfam": "IPv4", 00:17:38.728 "traddr": "10.0.0.3", 00:17:38.728 "trsvcid": "4420", 00:17:38.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.728 "prchk_reftag": false, 00:17:38.729 "prchk_guard": false, 00:17:38.729 "ctrlr_loss_timeout_sec": 0, 00:17:38.729 "reconnect_delay_sec": 0, 00:17:38.729 "fast_io_fail_timeout_sec": 0, 00:17:38.729 "psk": "key0", 00:17:38.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.729 "hdgst": false, 00:17:38.729 "ddgst": false, 00:17:38.729 "multipath": "multipath" 00:17:38.729 } 00:17:38.729 }, 00:17:38.729 { 00:17:38.729 "method": "bdev_nvme_set_hotplug", 00:17:38.729 "params": { 00:17:38.729 "period_us": 100000, 00:17:38.729 "enable": false 00:17:38.729 } 00:17:38.729 }, 00:17:38.729 { 00:17:38.729 "method": "bdev_wait_for_examine" 00:17:38.729 } 00:17:38.729 ] 00:17:38.729 }, 00:17:38.729 { 00:17:38.729 "subsystem": "nbd", 00:17:38.729 "config": [] 00:17:38.729 } 00:17:38.729 ] 00:17:38.729 }' 00:17:38.988 [2024-11-17 08:17:43.820382] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
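The bdevperf configuration echoed above handles the initiator side the same way: it preloads the PSK into the keyring and attaches a TLS-protected NVMe/TCP controller (named TLSTEST) before any I/O is issued. Done by hand against the bdevperf RPC socket instead of via -c /dev/fd/63, the equivalent would look roughly as follows; the flag set is copied from the bdev_nvme_attach_controller calls in the later runs of this log, with only the bdev name changed to match the JSON above:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1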
00:17:38.988 [2024-11-17 08:17:43.820724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75290 ] 00:17:39.246 [2024-11-17 08:17:44.006863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.246 [2024-11-17 08:17:44.126948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.505 [2024-11-17 08:17:44.445962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.763 [2024-11-17 08:17:44.596969] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.763 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.763 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:39.763 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:40.022 Running I/O for 10 seconds... 00:17:41.891 4581.00 IOPS, 17.89 MiB/s [2024-11-17T08:17:47.838Z] 4620.00 IOPS, 18.05 MiB/s [2024-11-17T08:17:49.213Z] 4637.67 IOPS, 18.12 MiB/s [2024-11-17T08:17:50.146Z] 4633.75 IOPS, 18.10 MiB/s [2024-11-17T08:17:51.079Z] 4624.60 IOPS, 18.06 MiB/s [2024-11-17T08:17:52.013Z] 4614.83 IOPS, 18.03 MiB/s [2024-11-17T08:17:52.950Z] 4608.86 IOPS, 18.00 MiB/s [2024-11-17T08:17:53.884Z] 4605.25 IOPS, 17.99 MiB/s [2024-11-17T08:17:54.819Z] 4603.67 IOPS, 17.98 MiB/s [2024-11-17T08:17:55.078Z] 4602.80 IOPS, 17.98 MiB/s 00:17:50.066 Latency(us) 00:17:50.066 [2024-11-17T08:17:55.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.066 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:50.066 Verification LBA range: start 0x0 length 0x2000 00:17:50.066 TLSTESTn1 : 10.02 4607.64 18.00 0.00 0.00 27735.68 6211.44 22950.76 00:17:50.066 [2024-11-17T08:17:55.078Z] =================================================================================================================== 00:17:50.066 [2024-11-17T08:17:55.078Z] Total : 4607.64 18.00 0.00 0.00 27735.68 6211.44 22950.76 00:17:50.066 { 00:17:50.066 "results": [ 00:17:50.066 { 00:17:50.066 "job": "TLSTESTn1", 00:17:50.066 "core_mask": "0x4", 00:17:50.066 "workload": "verify", 00:17:50.066 "status": "finished", 00:17:50.066 "verify_range": { 00:17:50.066 "start": 0, 00:17:50.066 "length": 8192 00:17:50.066 }, 00:17:50.066 "queue_depth": 128, 00:17:50.066 "io_size": 4096, 00:17:50.066 "runtime": 10.016843, 00:17:50.066 "iops": 4607.639353037679, 00:17:50.066 "mibps": 17.998591222803434, 00:17:50.066 "io_failed": 0, 00:17:50.066 "io_timeout": 0, 00:17:50.066 "avg_latency_us": 27735.675034636097, 00:17:50.066 "min_latency_us": 6211.444176706827, 00:17:50.066 "max_latency_us": 22950.75983935743 00:17:50.066 } 00:17:50.066 ], 00:17:50.066 "core_count": 1 00:17:50.066 } 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 75290 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75290 ']' 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 75290 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75290 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:50.066 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75290' 00:17:50.066 killing process with pid 75290 00:17:50.066 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.066 00:17:50.066 Latency(us) 00:17:50.066 [2024-11-17T08:17:55.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.066 [2024-11-17T08:17:55.078Z] =================================================================================================================== 00:17:50.066 [2024-11-17T08:17:55.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.067 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75290 00:17:50.067 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75290 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 75258 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75258 ']' 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75258 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75258 00:17:51.444 killing process with pid 75258 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75258' 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75258 00:17:51.444 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75258 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75447 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:52.383 08:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75447 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75447 ']' 00:17:52.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.383 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.643 [2024-11-17 08:17:57.485108] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:52.643 [2024-11-17 08:17:57.485258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.903 [2024-11-17 08:17:57.671929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.903 [2024-11-17 08:17:57.805196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.903 [2024-11-17 08:17:57.805256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.903 [2024-11-17 08:17:57.805273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.903 [2024-11-17 08:17:57.805294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.903 [2024-11-17 08:17:57.805307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
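waitforlisten blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket. A minimal sketch of that kind of wait, assuming a plain polling loop (this is not the actual autotest_common.sh implementation; rpc_get_methods is only used here as a cheap probe RPC):

  wait_for_rpc_sock() {
      # poll the SPDK RPC socket until it accepts a request, or give up after ~10s
      local sock=${1:-/var/tmp/spdk.sock}
      for _ in $(seq 1 100); do
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }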
00:17:52.903 [2024-11-17 08:17:57.806737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.163 [2024-11-17 08:17:58.048783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qQ8tnfd2jw 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qQ8tnfd2jw 00:17:53.422 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.682 [2024-11-17 08:17:58.568417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.682 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.940 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:54.200 [2024-11-17 08:17:58.959890] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.200 [2024-11-17 08:17:58.960427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.200 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.459 malloc0 00:17:54.459 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.718 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:54.718 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75503 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75503 /var/tmp/bdevperf.sock 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75503 ']' 00:17:54.977 
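bdevperf is launched idle here: -z presumably keeps it waiting for RPC configuration instead of starting immediately, -r points it at its own RPC socket, and -q/-o/-w/-t give the queue depth, I/O size, workload and duration of the eventual run. The controller is then injected over RPC and the workload is kicked off with the perform_tests helper. In outline (commands copied from this log, attach step abbreviated):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # ... keyring_file_add_key and bdev_nvme_attach_controller against /var/tmp/bdevperf.sock, as sketched earlier ...
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests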
08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.977 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.977 [2024-11-17 08:17:59.977427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:54.977 [2024-11-17 08:17:59.977550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75503 ] 00:17:55.236 [2024-11-17 08:18:00.160290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.495 [2024-11-17 08:18:00.265256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.495 [2024-11-17 08:18:00.458667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.755 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.755 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:55.755 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:17:56.014 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:56.273 [2024-11-17 08:18:01.150762] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.273 nvme0n1 00:17:56.273 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.531 Running I/O for 1 seconds... 
00:17:57.469 4484.00 IOPS, 17.52 MiB/s 00:17:57.469 Latency(us) 00:17:57.469 [2024-11-17T08:18:02.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.469 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:57.469 Verification LBA range: start 0x0 length 0x2000 00:17:57.469 nvme0n1 : 1.02 4536.38 17.72 0.00 0.00 28010.29 6500.96 36215.88 00:17:57.469 [2024-11-17T08:18:02.481Z] =================================================================================================================== 00:17:57.469 [2024-11-17T08:18:02.481Z] Total : 4536.38 17.72 0.00 0.00 28010.29 6500.96 36215.88 00:17:57.469 { 00:17:57.469 "results": [ 00:17:57.469 { 00:17:57.469 "job": "nvme0n1", 00:17:57.469 "core_mask": "0x2", 00:17:57.469 "workload": "verify", 00:17:57.469 "status": "finished", 00:17:57.469 "verify_range": { 00:17:57.469 "start": 0, 00:17:57.469 "length": 8192 00:17:57.469 }, 00:17:57.469 "queue_depth": 128, 00:17:57.469 "io_size": 4096, 00:17:57.469 "runtime": 1.016669, 00:17:57.469 "iops": 4536.3830312520595, 00:17:57.469 "mibps": 17.720246215828357, 00:17:57.469 "io_failed": 0, 00:17:57.469 "io_timeout": 0, 00:17:57.469 "avg_latency_us": 28010.29006363703, 00:17:57.469 "min_latency_us": 6500.960642570281, 00:17:57.469 "max_latency_us": 36215.87791164659 00:17:57.469 } 00:17:57.469 ], 00:17:57.469 "core_count": 1 00:17:57.469 } 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75503 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75503 ']' 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75503 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75503 00:17:57.469 killing process with pid 75503 00:17:57.469 Received shutdown signal, test time was about 1.000000 seconds 00:17:57.469 00:17:57.469 Latency(us) 00:17:57.469 [2024-11-17T08:18:02.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.469 [2024-11-17T08:18:02.481Z] =================================================================================================================== 00:17:57.469 [2024-11-17T08:18:02.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75503' 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75503 00:17:57.469 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75503 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75447 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75447 ']' 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75447 00:17:58.848 08:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75447 00:17:58.848 killing process with pid 75447 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75447' 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75447 00:17:58.848 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75447 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75578 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75578 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75578 ']' 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.298 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.298 [2024-11-17 08:18:05.050342] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:00.298 [2024-11-17 08:18:05.050502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.298 [2024-11-17 08:18:05.237813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.558 [2024-11-17 08:18:05.370832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.558 [2024-11-17 08:18:05.371141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:00.558 [2024-11-17 08:18:05.371173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.558 [2024-11-17 08:18:05.371196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.558 [2024-11-17 08:18:05.371213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.558 [2024-11-17 08:18:05.373033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.818 [2024-11-17 08:18:05.613977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.077 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.077 [2024-11-17 08:18:05.915203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.077 malloc0 00:18:01.077 [2024-11-17 08:18:05.981365] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.077 [2024-11-17 08:18:05.981720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75609 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75609 /var/tmp/bdevperf.sock 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75609 ']' 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
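Besides repeating the 1-second verify run, this last phase exercises configuration save and restore: after the run, both the target and bdevperf are asked for their live JSON configuration with save_config (the tgtcfg and bperfcfg dumps further down), the processes are killed, and a new nvmf_tgt is started directly from the captured JSON via -c /dev/fd/62. A rough sketch of that round trip, assuming the /dev/fd path in the log comes from bash process substitution:

  tgtcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
  bperfcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # after stopping the old processes, replay the captured target config at startup
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")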
00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.077 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:01.336 [2024-11-17 08:18:06.148806] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:01.336 [2024-11-17 08:18:06.148996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75609 ] 00:18:01.336 [2024-11-17 08:18:06.345526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.595 [2024-11-17 08:18:06.481104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.854 [2024-11-17 08:18:06.726949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.113 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.113 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:02.113 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qQ8tnfd2jw 00:18:02.372 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:02.633 [2024-11-17 08:18:07.396707] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.633 nvme0n1 00:18:02.633 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.633 Running I/O for 1 seconds... 
00:18:04.012 4240.00 IOPS, 16.56 MiB/s 00:18:04.012 Latency(us) 00:18:04.012 [2024-11-17T08:18:09.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.012 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:04.012 Verification LBA range: start 0x0 length 0x2000 00:18:04.012 nvme0n1 : 1.02 4301.16 16.80 0.00 0.00 29559.05 4263.79 23792.99 00:18:04.012 [2024-11-17T08:18:09.024Z] =================================================================================================================== 00:18:04.012 [2024-11-17T08:18:09.024Z] Total : 4301.16 16.80 0.00 0.00 29559.05 4263.79 23792.99 00:18:04.012 { 00:18:04.012 "results": [ 00:18:04.012 { 00:18:04.012 "job": "nvme0n1", 00:18:04.013 "core_mask": "0x2", 00:18:04.013 "workload": "verify", 00:18:04.013 "status": "finished", 00:18:04.013 "verify_range": { 00:18:04.013 "start": 0, 00:18:04.013 "length": 8192 00:18:04.013 }, 00:18:04.013 "queue_depth": 128, 00:18:04.013 "io_size": 4096, 00:18:04.013 "runtime": 1.015539, 00:18:04.013 "iops": 4301.164209350896, 00:18:04.013 "mibps": 16.80142269277694, 00:18:04.013 "io_failed": 0, 00:18:04.013 "io_timeout": 0, 00:18:04.013 "avg_latency_us": 29559.047889727408, 00:18:04.013 "min_latency_us": 4263.787951807229, 00:18:04.013 "max_latency_us": 23792.98955823293 00:18:04.013 } 00:18:04.013 ], 00:18:04.013 "core_count": 1 00:18:04.013 } 00:18:04.013 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:04.013 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.013 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.013 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.013 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:04.013 "subsystems": [ 00:18:04.013 { 00:18:04.013 "subsystem": "keyring", 00:18:04.013 "config": [ 00:18:04.013 { 00:18:04.013 "method": "keyring_file_add_key", 00:18:04.013 "params": { 00:18:04.013 "name": "key0", 00:18:04.013 "path": "/tmp/tmp.qQ8tnfd2jw" 00:18:04.013 } 00:18:04.013 } 00:18:04.013 ] 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "subsystem": "iobuf", 00:18:04.013 "config": [ 00:18:04.013 { 00:18:04.013 "method": "iobuf_set_options", 00:18:04.013 "params": { 00:18:04.013 "small_pool_count": 8192, 00:18:04.013 "large_pool_count": 1024, 00:18:04.013 "small_bufsize": 8192, 00:18:04.013 "large_bufsize": 135168, 00:18:04.013 "enable_numa": false 00:18:04.013 } 00:18:04.013 } 00:18:04.013 ] 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "subsystem": "sock", 00:18:04.013 "config": [ 00:18:04.013 { 00:18:04.013 "method": "sock_set_default_impl", 00:18:04.013 "params": { 00:18:04.013 "impl_name": "uring" 00:18:04.013 } 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "method": "sock_impl_set_options", 00:18:04.013 "params": { 00:18:04.013 "impl_name": "ssl", 00:18:04.013 "recv_buf_size": 4096, 00:18:04.013 "send_buf_size": 4096, 00:18:04.013 "enable_recv_pipe": true, 00:18:04.013 "enable_quickack": false, 00:18:04.013 "enable_placement_id": 0, 00:18:04.013 "enable_zerocopy_send_server": true, 00:18:04.013 "enable_zerocopy_send_client": false, 00:18:04.013 "zerocopy_threshold": 0, 00:18:04.013 "tls_version": 0, 00:18:04.013 "enable_ktls": false 00:18:04.013 } 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "method": "sock_impl_set_options", 00:18:04.013 "params": { 00:18:04.013 "impl_name": "posix", 
00:18:04.013 "recv_buf_size": 2097152, 00:18:04.013 "send_buf_size": 2097152, 00:18:04.013 "enable_recv_pipe": true, 00:18:04.013 "enable_quickack": false, 00:18:04.013 "enable_placement_id": 0, 00:18:04.013 "enable_zerocopy_send_server": true, 00:18:04.013 "enable_zerocopy_send_client": false, 00:18:04.013 "zerocopy_threshold": 0, 00:18:04.013 "tls_version": 0, 00:18:04.013 "enable_ktls": false 00:18:04.013 } 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "method": "sock_impl_set_options", 00:18:04.013 "params": { 00:18:04.013 "impl_name": "uring", 00:18:04.013 "recv_buf_size": 2097152, 00:18:04.013 "send_buf_size": 2097152, 00:18:04.013 "enable_recv_pipe": true, 00:18:04.013 "enable_quickack": false, 00:18:04.013 "enable_placement_id": 0, 00:18:04.013 "enable_zerocopy_send_server": false, 00:18:04.013 "enable_zerocopy_send_client": false, 00:18:04.013 "zerocopy_threshold": 0, 00:18:04.013 "tls_version": 0, 00:18:04.013 "enable_ktls": false 00:18:04.013 } 00:18:04.013 } 00:18:04.013 ] 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "subsystem": "vmd", 00:18:04.013 "config": [] 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "subsystem": "accel", 00:18:04.013 "config": [ 00:18:04.013 { 00:18:04.013 "method": "accel_set_options", 00:18:04.013 "params": { 00:18:04.013 "small_cache_size": 128, 00:18:04.013 "large_cache_size": 16, 00:18:04.013 "task_count": 2048, 00:18:04.013 "sequence_count": 2048, 00:18:04.013 "buf_count": 2048 00:18:04.013 } 00:18:04.013 } 00:18:04.013 ] 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "subsystem": "bdev", 00:18:04.013 "config": [ 00:18:04.013 { 00:18:04.013 "method": "bdev_set_options", 00:18:04.013 "params": { 00:18:04.013 "bdev_io_pool_size": 65535, 00:18:04.013 "bdev_io_cache_size": 256, 00:18:04.013 "bdev_auto_examine": true, 00:18:04.013 "iobuf_small_cache_size": 128, 00:18:04.013 "iobuf_large_cache_size": 16 00:18:04.013 } 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "method": "bdev_raid_set_options", 00:18:04.013 "params": { 00:18:04.013 "process_window_size_kb": 1024, 00:18:04.013 "process_max_bandwidth_mb_sec": 0 00:18:04.013 } 00:18:04.013 }, 00:18:04.013 { 00:18:04.013 "method": "bdev_iscsi_set_options", 00:18:04.014 "params": { 00:18:04.014 "timeout_sec": 30 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "bdev_nvme_set_options", 00:18:04.014 "params": { 00:18:04.014 "action_on_timeout": "none", 00:18:04.014 "timeout_us": 0, 00:18:04.014 "timeout_admin_us": 0, 00:18:04.014 "keep_alive_timeout_ms": 10000, 00:18:04.014 "arbitration_burst": 0, 00:18:04.014 "low_priority_weight": 0, 00:18:04.014 "medium_priority_weight": 0, 00:18:04.014 "high_priority_weight": 0, 00:18:04.014 "nvme_adminq_poll_period_us": 10000, 00:18:04.014 "nvme_ioq_poll_period_us": 0, 00:18:04.014 "io_queue_requests": 0, 00:18:04.014 "delay_cmd_submit": true, 00:18:04.014 "transport_retry_count": 4, 00:18:04.014 "bdev_retry_count": 3, 00:18:04.014 "transport_ack_timeout": 0, 00:18:04.014 "ctrlr_loss_timeout_sec": 0, 00:18:04.014 "reconnect_delay_sec": 0, 00:18:04.014 "fast_io_fail_timeout_sec": 0, 00:18:04.014 "disable_auto_failback": false, 00:18:04.014 "generate_uuids": false, 00:18:04.014 "transport_tos": 0, 00:18:04.014 "nvme_error_stat": false, 00:18:04.014 "rdma_srq_size": 0, 00:18:04.014 "io_path_stat": false, 00:18:04.014 "allow_accel_sequence": false, 00:18:04.014 "rdma_max_cq_size": 0, 00:18:04.014 "rdma_cm_event_timeout_ms": 0, 00:18:04.014 "dhchap_digests": [ 00:18:04.014 "sha256", 00:18:04.014 "sha384", 00:18:04.014 "sha512" 00:18:04.014 ], 00:18:04.014 
"dhchap_dhgroups": [ 00:18:04.014 "null", 00:18:04.014 "ffdhe2048", 00:18:04.014 "ffdhe3072", 00:18:04.014 "ffdhe4096", 00:18:04.014 "ffdhe6144", 00:18:04.014 "ffdhe8192" 00:18:04.014 ] 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "bdev_nvme_set_hotplug", 00:18:04.014 "params": { 00:18:04.014 "period_us": 100000, 00:18:04.014 "enable": false 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "bdev_malloc_create", 00:18:04.014 "params": { 00:18:04.014 "name": "malloc0", 00:18:04.014 "num_blocks": 8192, 00:18:04.014 "block_size": 4096, 00:18:04.014 "physical_block_size": 4096, 00:18:04.014 "uuid": "f9884f4c-4c2e-496b-9465-a1541068be67", 00:18:04.014 "optimal_io_boundary": 0, 00:18:04.014 "md_size": 0, 00:18:04.014 "dif_type": 0, 00:18:04.014 "dif_is_head_of_md": false, 00:18:04.014 "dif_pi_format": 0 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "bdev_wait_for_examine" 00:18:04.014 } 00:18:04.014 ] 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "subsystem": "nbd", 00:18:04.014 "config": [] 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "subsystem": "scheduler", 00:18:04.014 "config": [ 00:18:04.014 { 00:18:04.014 "method": "framework_set_scheduler", 00:18:04.014 "params": { 00:18:04.014 "name": "static" 00:18:04.014 } 00:18:04.014 } 00:18:04.014 ] 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "subsystem": "nvmf", 00:18:04.014 "config": [ 00:18:04.014 { 00:18:04.014 "method": "nvmf_set_config", 00:18:04.014 "params": { 00:18:04.014 "discovery_filter": "match_any", 00:18:04.014 "admin_cmd_passthru": { 00:18:04.014 "identify_ctrlr": false 00:18:04.014 }, 00:18:04.014 "dhchap_digests": [ 00:18:04.014 "sha256", 00:18:04.014 "sha384", 00:18:04.014 "sha512" 00:18:04.014 ], 00:18:04.014 "dhchap_dhgroups": [ 00:18:04.014 "null", 00:18:04.014 "ffdhe2048", 00:18:04.014 "ffdhe3072", 00:18:04.014 "ffdhe4096", 00:18:04.014 "ffdhe6144", 00:18:04.014 "ffdhe8192" 00:18:04.014 ] 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "nvmf_set_max_subsystems", 00:18:04.014 "params": { 00:18:04.014 "max_subsystems": 1024 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "nvmf_set_crdt", 00:18:04.014 "params": { 00:18:04.014 "crdt1": 0, 00:18:04.014 "crdt2": 0, 00:18:04.014 "crdt3": 0 00:18:04.014 } 00:18:04.014 }, 00:18:04.014 { 00:18:04.014 "method": "nvmf_create_transport", 00:18:04.014 "params": { 00:18:04.014 "trtype": "TCP", 00:18:04.014 "max_queue_depth": 128, 00:18:04.015 "max_io_qpairs_per_ctrlr": 127, 00:18:04.015 "in_capsule_data_size": 4096, 00:18:04.015 "max_io_size": 131072, 00:18:04.015 "io_unit_size": 131072, 00:18:04.015 "max_aq_depth": 128, 00:18:04.015 "num_shared_buffers": 511, 00:18:04.015 "buf_cache_size": 4294967295, 00:18:04.015 "dif_insert_or_strip": false, 00:18:04.015 "zcopy": false, 00:18:04.015 "c2h_success": false, 00:18:04.015 "sock_priority": 0, 00:18:04.015 "abort_timeout_sec": 1, 00:18:04.015 "ack_timeout": 0, 00:18:04.015 "data_wr_pool_size": 0 00:18:04.015 } 00:18:04.015 }, 00:18:04.015 { 00:18:04.015 "method": "nvmf_create_subsystem", 00:18:04.015 "params": { 00:18:04.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.015 "allow_any_host": false, 00:18:04.015 "serial_number": "00000000000000000000", 00:18:04.015 "model_number": "SPDK bdev Controller", 00:18:04.015 "max_namespaces": 32, 00:18:04.015 "min_cntlid": 1, 00:18:04.015 "max_cntlid": 65519, 00:18:04.015 "ana_reporting": false 00:18:04.015 } 00:18:04.015 }, 00:18:04.015 { 00:18:04.015 "method": "nvmf_subsystem_add_host", 
00:18:04.015 "params": { 00:18:04.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.015 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.015 "psk": "key0" 00:18:04.015 } 00:18:04.015 }, 00:18:04.015 { 00:18:04.015 "method": "nvmf_subsystem_add_ns", 00:18:04.015 "params": { 00:18:04.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.015 "namespace": { 00:18:04.015 "nsid": 1, 00:18:04.015 "bdev_name": "malloc0", 00:18:04.015 "nguid": "F9884F4C4C2E496B9465A1541068BE67", 00:18:04.015 "uuid": "f9884f4c-4c2e-496b-9465-a1541068be67", 00:18:04.015 "no_auto_visible": false 00:18:04.015 } 00:18:04.015 } 00:18:04.015 }, 00:18:04.015 { 00:18:04.015 "method": "nvmf_subsystem_add_listener", 00:18:04.015 "params": { 00:18:04.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.015 "listen_address": { 00:18:04.015 "trtype": "TCP", 00:18:04.015 "adrfam": "IPv4", 00:18:04.015 "traddr": "10.0.0.3", 00:18:04.015 "trsvcid": "4420" 00:18:04.015 }, 00:18:04.015 "secure_channel": false, 00:18:04.015 "sock_impl": "ssl" 00:18:04.015 } 00:18:04.015 } 00:18:04.015 ] 00:18:04.015 } 00:18:04.015 ] 00:18:04.015 }' 00:18:04.015 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:04.275 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:04.276 "subsystems": [ 00:18:04.276 { 00:18:04.276 "subsystem": "keyring", 00:18:04.276 "config": [ 00:18:04.276 { 00:18:04.276 "method": "keyring_file_add_key", 00:18:04.276 "params": { 00:18:04.276 "name": "key0", 00:18:04.276 "path": "/tmp/tmp.qQ8tnfd2jw" 00:18:04.276 } 00:18:04.276 } 00:18:04.276 ] 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "subsystem": "iobuf", 00:18:04.276 "config": [ 00:18:04.276 { 00:18:04.276 "method": "iobuf_set_options", 00:18:04.276 "params": { 00:18:04.276 "small_pool_count": 8192, 00:18:04.276 "large_pool_count": 1024, 00:18:04.276 "small_bufsize": 8192, 00:18:04.276 "large_bufsize": 135168, 00:18:04.276 "enable_numa": false 00:18:04.276 } 00:18:04.276 } 00:18:04.276 ] 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "subsystem": "sock", 00:18:04.276 "config": [ 00:18:04.276 { 00:18:04.276 "method": "sock_set_default_impl", 00:18:04.276 "params": { 00:18:04.276 "impl_name": "uring" 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "sock_impl_set_options", 00:18:04.276 "params": { 00:18:04.276 "impl_name": "ssl", 00:18:04.276 "recv_buf_size": 4096, 00:18:04.276 "send_buf_size": 4096, 00:18:04.276 "enable_recv_pipe": true, 00:18:04.276 "enable_quickack": false, 00:18:04.276 "enable_placement_id": 0, 00:18:04.276 "enable_zerocopy_send_server": true, 00:18:04.276 "enable_zerocopy_send_client": false, 00:18:04.276 "zerocopy_threshold": 0, 00:18:04.276 "tls_version": 0, 00:18:04.276 "enable_ktls": false 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "sock_impl_set_options", 00:18:04.276 "params": { 00:18:04.276 "impl_name": "posix", 00:18:04.276 "recv_buf_size": 2097152, 00:18:04.276 "send_buf_size": 2097152, 00:18:04.276 "enable_recv_pipe": true, 00:18:04.276 "enable_quickack": false, 00:18:04.276 "enable_placement_id": 0, 00:18:04.276 "enable_zerocopy_send_server": true, 00:18:04.276 "enable_zerocopy_send_client": false, 00:18:04.276 "zerocopy_threshold": 0, 00:18:04.276 "tls_version": 0, 00:18:04.276 "enable_ktls": false 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "sock_impl_set_options", 00:18:04.276 "params": { 00:18:04.276 "impl_name": "uring", 00:18:04.276 
"recv_buf_size": 2097152, 00:18:04.276 "send_buf_size": 2097152, 00:18:04.276 "enable_recv_pipe": true, 00:18:04.276 "enable_quickack": false, 00:18:04.276 "enable_placement_id": 0, 00:18:04.276 "enable_zerocopy_send_server": false, 00:18:04.276 "enable_zerocopy_send_client": false, 00:18:04.276 "zerocopy_threshold": 0, 00:18:04.276 "tls_version": 0, 00:18:04.276 "enable_ktls": false 00:18:04.276 } 00:18:04.276 } 00:18:04.276 ] 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "subsystem": "vmd", 00:18:04.276 "config": [] 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "subsystem": "accel", 00:18:04.276 "config": [ 00:18:04.276 { 00:18:04.276 "method": "accel_set_options", 00:18:04.276 "params": { 00:18:04.276 "small_cache_size": 128, 00:18:04.276 "large_cache_size": 16, 00:18:04.276 "task_count": 2048, 00:18:04.276 "sequence_count": 2048, 00:18:04.276 "buf_count": 2048 00:18:04.276 } 00:18:04.276 } 00:18:04.276 ] 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "subsystem": "bdev", 00:18:04.276 "config": [ 00:18:04.276 { 00:18:04.276 "method": "bdev_set_options", 00:18:04.276 "params": { 00:18:04.276 "bdev_io_pool_size": 65535, 00:18:04.276 "bdev_io_cache_size": 256, 00:18:04.276 "bdev_auto_examine": true, 00:18:04.276 "iobuf_small_cache_size": 128, 00:18:04.276 "iobuf_large_cache_size": 16 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_raid_set_options", 00:18:04.276 "params": { 00:18:04.276 "process_window_size_kb": 1024, 00:18:04.276 "process_max_bandwidth_mb_sec": 0 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_iscsi_set_options", 00:18:04.276 "params": { 00:18:04.276 "timeout_sec": 30 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_nvme_set_options", 00:18:04.276 "params": { 00:18:04.276 "action_on_timeout": "none", 00:18:04.276 "timeout_us": 0, 00:18:04.276 "timeout_admin_us": 0, 00:18:04.276 "keep_alive_timeout_ms": 10000, 00:18:04.276 "arbitration_burst": 0, 00:18:04.276 "low_priority_weight": 0, 00:18:04.276 "medium_priority_weight": 0, 00:18:04.276 "high_priority_weight": 0, 00:18:04.276 "nvme_adminq_poll_period_us": 10000, 00:18:04.276 "nvme_ioq_poll_period_us": 0, 00:18:04.276 "io_queue_requests": 512, 00:18:04.276 "delay_cmd_submit": true, 00:18:04.276 "transport_retry_count": 4, 00:18:04.276 "bdev_retry_count": 3, 00:18:04.276 "transport_ack_timeout": 0, 00:18:04.276 "ctrlr_loss_timeout_sec": 0, 00:18:04.276 "reconnect_delay_sec": 0, 00:18:04.276 "fast_io_fail_timeout_sec": 0, 00:18:04.276 "disable_auto_failback": false, 00:18:04.276 "generate_uuids": false, 00:18:04.276 "transport_tos": 0, 00:18:04.276 "nvme_error_stat": false, 00:18:04.276 "rdma_srq_size": 0, 00:18:04.276 "io_path_stat": false, 00:18:04.276 "allow_accel_sequence": false, 00:18:04.276 "rdma_max_cq_size": 0, 00:18:04.276 "rdma_cm_event_timeout_ms": 0, 00:18:04.276 "dhchap_digests": [ 00:18:04.276 "sha256", 00:18:04.276 "sha384", 00:18:04.276 "sha512" 00:18:04.276 ], 00:18:04.276 "dhchap_dhgroups": [ 00:18:04.276 "null", 00:18:04.276 "ffdhe2048", 00:18:04.276 "ffdhe3072", 00:18:04.276 "ffdhe4096", 00:18:04.276 "ffdhe6144", 00:18:04.276 "ffdhe8192" 00:18:04.276 ] 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_nvme_attach_controller", 00:18:04.276 "params": { 00:18:04.276 "name": "nvme0", 00:18:04.276 "trtype": "TCP", 00:18:04.276 "adrfam": "IPv4", 00:18:04.276 "traddr": "10.0.0.3", 00:18:04.276 "trsvcid": "4420", 00:18:04.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.276 "prchk_reftag": false, 00:18:04.276 
"prchk_guard": false, 00:18:04.276 "ctrlr_loss_timeout_sec": 0, 00:18:04.276 "reconnect_delay_sec": 0, 00:18:04.276 "fast_io_fail_timeout_sec": 0, 00:18:04.276 "psk": "key0", 00:18:04.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.276 "hdgst": false, 00:18:04.276 "ddgst": false, 00:18:04.276 "multipath": "multipath" 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_nvme_set_hotplug", 00:18:04.276 "params": { 00:18:04.276 "period_us": 100000, 00:18:04.276 "enable": false 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_enable_histogram", 00:18:04.276 "params": { 00:18:04.276 "name": "nvme0n1", 00:18:04.276 "enable": true 00:18:04.276 } 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "method": "bdev_wait_for_examine" 00:18:04.276 } 00:18:04.276 ] 00:18:04.276 }, 00:18:04.276 { 00:18:04.276 "subsystem": "nbd", 00:18:04.276 "config": [] 00:18:04.276 } 00:18:04.276 ] 00:18:04.276 }' 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75609 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75609 ']' 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75609 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75609 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:04.276 killing process with pid 75609 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75609' 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75609 00:18:04.276 Received shutdown signal, test time was about 1.000000 seconds 00:18:04.276 00:18:04.276 Latency(us) 00:18:04.276 [2024-11-17T08:18:09.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.276 [2024-11-17T08:18:09.288Z] =================================================================================================================== 00:18:04.276 [2024-11-17T08:18:09.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.276 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75609 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75578 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75578 ']' 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75578 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75578 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.214 08:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.214 killing process with pid 75578 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75578' 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75578 00:18:05.214 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75578 00:18:06.594 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:06.594 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.594 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.594 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.594 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:06.594 "subsystems": [ 00:18:06.594 { 00:18:06.594 "subsystem": "keyring", 00:18:06.594 "config": [ 00:18:06.594 { 00:18:06.594 "method": "keyring_file_add_key", 00:18:06.594 "params": { 00:18:06.594 "name": "key0", 00:18:06.594 "path": "/tmp/tmp.qQ8tnfd2jw" 00:18:06.594 } 00:18:06.594 } 00:18:06.594 ] 00:18:06.594 }, 00:18:06.594 { 00:18:06.594 "subsystem": "iobuf", 00:18:06.594 "config": [ 00:18:06.594 { 00:18:06.594 "method": "iobuf_set_options", 00:18:06.594 "params": { 00:18:06.594 "small_pool_count": 8192, 00:18:06.594 "large_pool_count": 1024, 00:18:06.594 "small_bufsize": 8192, 00:18:06.594 "large_bufsize": 135168, 00:18:06.594 "enable_numa": false 00:18:06.594 } 00:18:06.594 } 00:18:06.594 ] 00:18:06.594 }, 00:18:06.594 { 00:18:06.594 "subsystem": "sock", 00:18:06.594 "config": [ 00:18:06.594 { 00:18:06.594 "method": "sock_set_default_impl", 00:18:06.594 "params": { 00:18:06.594 "impl_name": "uring" 00:18:06.594 } 00:18:06.594 }, 00:18:06.594 { 00:18:06.595 "method": "sock_impl_set_options", 00:18:06.595 "params": { 00:18:06.595 "impl_name": "ssl", 00:18:06.595 "recv_buf_size": 4096, 00:18:06.595 "send_buf_size": 4096, 00:18:06.595 "enable_recv_pipe": true, 00:18:06.595 "enable_quickack": false, 00:18:06.595 "enable_placement_id": 0, 00:18:06.595 "enable_zerocopy_send_server": true, 00:18:06.595 "enable_zerocopy_send_client": false, 00:18:06.595 "zerocopy_threshold": 0, 00:18:06.595 "tls_version": 0, 00:18:06.595 "enable_ktls": false 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "sock_impl_set_options", 00:18:06.595 "params": { 00:18:06.595 "impl_name": "posix", 00:18:06.595 "recv_buf_size": 2097152, 00:18:06.595 "send_buf_size": 2097152, 00:18:06.595 "enable_recv_pipe": true, 00:18:06.595 "enable_quickack": false, 00:18:06.595 "enable_placement_id": 0, 00:18:06.595 "enable_zerocopy_send_server": true, 00:18:06.595 "enable_zerocopy_send_client": false, 00:18:06.595 "zerocopy_threshold": 0, 00:18:06.595 "tls_version": 0, 00:18:06.595 "enable_ktls": false 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "sock_impl_set_options", 00:18:06.595 "params": { 00:18:06.595 "impl_name": "uring", 00:18:06.595 "recv_buf_size": 2097152, 00:18:06.595 "send_buf_size": 2097152, 00:18:06.595 "enable_recv_pipe": true, 00:18:06.595 "enable_quickack": false, 00:18:06.595 "enable_placement_id": 0, 00:18:06.595 "enable_zerocopy_send_server": false, 00:18:06.595 "enable_zerocopy_send_client": false, 00:18:06.595 "zerocopy_threshold": 0, 00:18:06.595 
"tls_version": 0, 00:18:06.595 "enable_ktls": false 00:18:06.595 } 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "subsystem": "vmd", 00:18:06.595 "config": [] 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "subsystem": "accel", 00:18:06.595 "config": [ 00:18:06.595 { 00:18:06.595 "method": "accel_set_options", 00:18:06.595 "params": { 00:18:06.595 "small_cache_size": 128, 00:18:06.595 "large_cache_size": 16, 00:18:06.595 "task_count": 2048, 00:18:06.595 "sequence_count": 2048, 00:18:06.595 "buf_count": 2048 00:18:06.595 } 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "subsystem": "bdev", 00:18:06.595 "config": [ 00:18:06.595 { 00:18:06.595 "method": "bdev_set_options", 00:18:06.595 "params": { 00:18:06.595 "bdev_io_pool_size": 65535, 00:18:06.595 "bdev_io_cache_size": 256, 00:18:06.595 "bdev_auto_examine": true, 00:18:06.595 "iobuf_small_cache_size": 128, 00:18:06.595 "iobuf_large_cache_size": 16 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "bdev_raid_set_options", 00:18:06.595 "params": { 00:18:06.595 "process_window_size_kb": 1024, 00:18:06.595 "process_max_bandwidth_mb_sec": 0 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "bdev_iscsi_set_options", 00:18:06.595 "params": { 00:18:06.595 "timeout_sec": 30 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "bdev_nvme_set_options", 00:18:06.595 "params": { 00:18:06.595 "action_on_timeout": "none", 00:18:06.595 "timeout_us": 0, 00:18:06.595 "timeout_admin_us": 0, 00:18:06.595 "keep_alive_timeout_ms": 10000, 00:18:06.595 "arbitration_burst": 0, 00:18:06.595 "low_priority_weight": 0, 00:18:06.595 "medium_priority_weight": 0, 00:18:06.595 "high_priority_weight": 0, 00:18:06.595 "nvme_adminq_poll_period_us": 10000, 00:18:06.595 "nvme_ioq_poll_period_us": 0, 00:18:06.595 "io_queue_requests": 0, 00:18:06.595 "delay_cmd_submit": true, 00:18:06.595 "transport_retry_count": 4, 00:18:06.595 "bdev_retry_count": 3, 00:18:06.595 "transport_ack_timeout": 0, 00:18:06.595 "ctrlr_loss_timeout_sec": 0, 00:18:06.595 "reconnect_delay_sec": 0, 00:18:06.595 "fast_io_fail_timeout_sec": 0, 00:18:06.595 "disable_auto_failback": false, 00:18:06.595 "generate_uuids": false, 00:18:06.595 "transport_tos": 0, 00:18:06.595 "nvme_error_stat": false, 00:18:06.595 "rdma_srq_size": 0, 00:18:06.595 "io_path_stat": false, 00:18:06.595 "allow_accel_sequence": false, 00:18:06.595 "rdma_max_cq_size": 0, 00:18:06.595 "rdma_cm_event_timeout_ms": 0, 00:18:06.595 "dhchap_digests": [ 00:18:06.595 "sha256", 00:18:06.595 "sha384", 00:18:06.595 "sha512" 00:18:06.595 ], 00:18:06.595 "dhchap_dhgroups": [ 00:18:06.595 "null", 00:18:06.595 "ffdhe2048", 00:18:06.595 "ffdhe3072", 00:18:06.595 "ffdhe4096", 00:18:06.595 "ffdhe6144", 00:18:06.595 "ffdhe8192" 00:18:06.595 ] 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "bdev_nvme_set_hotplug", 00:18:06.595 "params": { 00:18:06.595 "period_us": 100000, 00:18:06.595 "enable": false 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "bdev_malloc_create", 00:18:06.595 "params": { 00:18:06.595 "name": "malloc0", 00:18:06.595 "num_blocks": 8192, 00:18:06.595 "block_size": 4096, 00:18:06.595 "physical_block_size": 4096, 00:18:06.595 "uuid": "f9884f4c-4c2e-496b-9465-a1541068be67", 00:18:06.595 "optimal_io_boundary": 0, 00:18:06.595 "md_size": 0, 00:18:06.595 "dif_type": 0, 00:18:06.595 "dif_is_head_of_md": false, 00:18:06.595 "dif_pi_format": 0 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 
00:18:06.595 "method": "bdev_wait_for_examine" 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "subsystem": "nbd", 00:18:06.595 "config": [] 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "subsystem": "scheduler", 00:18:06.595 "config": [ 00:18:06.595 { 00:18:06.595 "method": "framework_set_scheduler", 00:18:06.595 "params": { 00:18:06.595 "name": "static" 00:18:06.595 } 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "subsystem": "nvmf", 00:18:06.595 "config": [ 00:18:06.595 { 00:18:06.595 "method": "nvmf_set_config", 00:18:06.595 "params": { 00:18:06.595 "discovery_filter": "match_any", 00:18:06.595 "admin_cmd_passthru": { 00:18:06.595 "identify_ctrlr": false 00:18:06.595 }, 00:18:06.595 "dhchap_digests": [ 00:18:06.595 "sha256", 00:18:06.595 "sha384", 00:18:06.595 "sha512" 00:18:06.595 ], 00:18:06.595 "dhchap_dhgroups": [ 00:18:06.595 "null", 00:18:06.595 "ffdhe2048", 00:18:06.595 "ffdhe3072", 00:18:06.595 "ffdhe4096", 00:18:06.595 "ffdhe6144", 00:18:06.595 "ffdhe8192" 00:18:06.595 ] 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_set_max_subsystems", 00:18:06.595 "params": { 00:18:06.595 "max_subsystems": 1024 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_set_crdt", 00:18:06.595 "params": { 00:18:06.595 "crdt1": 0, 00:18:06.595 "crdt2": 0, 00:18:06.595 "crdt3": 0 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_create_transport", 00:18:06.595 "params": { 00:18:06.595 "trtype": "TCP", 00:18:06.595 "max_queue_depth": 128, 00:18:06.595 "max_io_qpairs_per_ctrlr": 127, 00:18:06.595 "in_capsule_data_size": 4096, 00:18:06.595 "max_io_size": 131072, 00:18:06.595 "io_unit_size": 131072, 00:18:06.595 "max_aq_depth": 128, 00:18:06.595 "num_shared_buffers": 511, 00:18:06.595 "buf_cache_size": 4294967295, 00:18:06.595 "dif_insert_or_strip": false, 00:18:06.595 "zcopy": false, 00:18:06.595 "c2h_success": false, 00:18:06.595 "sock_priority": 0, 00:18:06.595 "abort_timeout_sec": 1, 00:18:06.595 "ack_timeout": 0, 00:18:06.595 "data_wr_pool_size": 0 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_create_subsystem", 00:18:06.595 "params": { 00:18:06.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.595 "allow_any_host": false, 00:18:06.595 "serial_number": "00000000000000000000", 00:18:06.595 "model_number": "SPDK bdev Controller", 00:18:06.595 "max_namespaces": 32, 00:18:06.595 "min_cntlid": 1, 00:18:06.595 "max_cntlid": 65519, 00:18:06.595 "ana_reporting": false 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_subsystem_add_host", 00:18:06.595 "params": { 00:18:06.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.595 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.595 "psk": "key0" 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_subsystem_add_ns", 00:18:06.595 "params": { 00:18:06.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.595 "namespace": { 00:18:06.595 "nsid": 1, 00:18:06.595 "bdev_name": "malloc0", 00:18:06.595 "nguid": "F9884F4C4C2E496B9465A1541068BE67", 00:18:06.595 "uuid": "f9884f4c-4c2e-496b-9465-a1541068be67", 00:18:06.595 "no_auto_visible": false 00:18:06.595 } 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "method": "nvmf_subsystem_add_listener", 00:18:06.595 "params": { 00:18:06.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.596 "listen_address": { 00:18:06.596 "trtype": "TCP", 00:18:06.596 "adrfam": "IPv4", 00:18:06.596 "traddr": "10.0.0.3", 00:18:06.596 
"trsvcid": "4420" 00:18:06.596 }, 00:18:06.596 "secure_channel": false, 00:18:06.596 "sock_impl": "ssl" 00:18:06.596 } 00:18:06.596 } 00:18:06.596 ] 00:18:06.596 } 00:18:06.596 ] 00:18:06.596 }' 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75684 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75684 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75684 ']' 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.596 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.855 [2024-11-17 08:18:11.632836] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:06.855 [2024-11-17 08:18:11.632953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.855 [2024-11-17 08:18:11.823610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.115 [2024-11-17 08:18:11.953402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.115 [2024-11-17 08:18:11.953469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.115 [2024-11-17 08:18:11.953487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.115 [2024-11-17 08:18:11.953508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.115 [2024-11-17 08:18:11.953522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:07.115 [2024-11-17 08:18:11.955044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.374 [2024-11-17 08:18:12.316269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.634 [2024-11-17 08:18:12.531616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.634 [2024-11-17 08:18:12.563499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.634 [2024-11-17 08:18:12.563780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.634 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.634 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:07.634 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.634 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.634 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=75716 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 75716 /var/tmp/bdevperf.sock 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75716 ']' 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.894 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:07.894 "subsystems": [ 00:18:07.894 { 00:18:07.894 "subsystem": "keyring", 00:18:07.894 "config": [ 00:18:07.894 { 00:18:07.894 "method": "keyring_file_add_key", 00:18:07.894 "params": { 00:18:07.894 "name": "key0", 00:18:07.894 "path": "/tmp/tmp.qQ8tnfd2jw" 00:18:07.894 } 00:18:07.894 } 00:18:07.894 ] 00:18:07.894 }, 00:18:07.894 { 00:18:07.894 "subsystem": "iobuf", 00:18:07.894 "config": [ 00:18:07.894 { 00:18:07.894 "method": "iobuf_set_options", 00:18:07.894 "params": { 00:18:07.894 "small_pool_count": 8192, 00:18:07.894 "large_pool_count": 1024, 00:18:07.894 "small_bufsize": 8192, 00:18:07.894 "large_bufsize": 135168, 00:18:07.894 "enable_numa": false 00:18:07.894 } 00:18:07.894 } 00:18:07.894 ] 00:18:07.894 }, 00:18:07.894 { 00:18:07.894 "subsystem": "sock", 00:18:07.894 "config": [ 00:18:07.894 { 00:18:07.894 "method": "sock_set_default_impl", 00:18:07.894 "params": { 00:18:07.894 "impl_name": "uring" 00:18:07.894 } 00:18:07.894 }, 00:18:07.894 { 00:18:07.894 "method": "sock_impl_set_options", 00:18:07.894 "params": { 00:18:07.894 "impl_name": "ssl", 00:18:07.894 "recv_buf_size": 4096, 00:18:07.894 "send_buf_size": 4096, 00:18:07.894 "enable_recv_pipe": true, 00:18:07.894 "enable_quickack": false, 00:18:07.895 "enable_placement_id": 0, 00:18:07.895 "enable_zerocopy_send_server": true, 00:18:07.895 "enable_zerocopy_send_client": false, 00:18:07.895 "zerocopy_threshold": 0, 00:18:07.895 "tls_version": 0, 00:18:07.895 "enable_ktls": false 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "sock_impl_set_options", 00:18:07.895 "params": { 00:18:07.895 "impl_name": "posix", 00:18:07.895 "recv_buf_size": 2097152, 00:18:07.895 "send_buf_size": 2097152, 00:18:07.895 "enable_recv_pipe": true, 00:18:07.895 "enable_quickack": false, 00:18:07.895 "enable_placement_id": 0, 00:18:07.895 "enable_zerocopy_send_server": true, 00:18:07.895 "enable_zerocopy_send_client": false, 00:18:07.895 "zerocopy_threshold": 0, 00:18:07.895 "tls_version": 0, 00:18:07.895 "enable_ktls": false 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "sock_impl_set_options", 00:18:07.895 "params": { 00:18:07.895 "impl_name": "uring", 00:18:07.895 "recv_buf_size": 2097152, 00:18:07.895 "send_buf_size": 2097152, 00:18:07.895 "enable_recv_pipe": true, 00:18:07.895 "enable_quickack": false, 00:18:07.895 "enable_placement_id": 0, 00:18:07.895 "enable_zerocopy_send_server": false, 00:18:07.895 "enable_zerocopy_send_client": false, 00:18:07.895 "zerocopy_threshold": 0, 00:18:07.895 "tls_version": 0, 00:18:07.895 "enable_ktls": false 00:18:07.895 } 00:18:07.895 } 00:18:07.895 ] 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "subsystem": "vmd", 00:18:07.895 "config": [] 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "subsystem": "accel", 00:18:07.895 "config": [ 00:18:07.895 { 00:18:07.895 "method": "accel_set_options", 00:18:07.895 "params": { 00:18:07.895 "small_cache_size": 128, 00:18:07.895 "large_cache_size": 16, 00:18:07.895 "task_count": 2048, 00:18:07.895 "sequence_count": 2048, 00:18:07.895 "buf_count": 2048 00:18:07.895 } 00:18:07.895 } 00:18:07.895 ] 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "subsystem": "bdev", 00:18:07.895 "config": [ 00:18:07.895 { 00:18:07.895 "method": 
"bdev_set_options", 00:18:07.895 "params": { 00:18:07.895 "bdev_io_pool_size": 65535, 00:18:07.895 "bdev_io_cache_size": 256, 00:18:07.895 "bdev_auto_examine": true, 00:18:07.895 "iobuf_small_cache_size": 128, 00:18:07.895 "iobuf_large_cache_size": 16 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "bdev_raid_set_options", 00:18:07.895 "params": { 00:18:07.895 "process_window_size_kb": 1024, 00:18:07.895 "process_max_bandwidth_mb_sec": 0 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "bdev_iscsi_set_options", 00:18:07.895 "params": { 00:18:07.895 "timeout_sec": 30 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "bdev_nvme_set_options", 00:18:07.895 "params": { 00:18:07.895 "action_on_timeout": "none", 00:18:07.895 "timeout_us": 0, 00:18:07.895 "timeout_admin_us": 0, 00:18:07.895 "keep_alive_timeout_ms": 10000, 00:18:07.895 "arbitration_burst": 0, 00:18:07.895 "low_priority_weight": 0, 00:18:07.895 "medium_priority_weight": 0, 00:18:07.895 "high_priority_weight": 0, 00:18:07.895 "nvme_adminq_poll_period_us": 10000, 00:18:07.895 "nvme_ioq_poll_period_us": 0, 00:18:07.895 "io_queue_requests": 512, 00:18:07.895 "delay_cmd_submit": true, 00:18:07.895 "transport_retry_count": 4, 00:18:07.895 "bdev_retry_count": 3, 00:18:07.895 "transport_ack_timeout": 0, 00:18:07.895 "ctrlr_loss_timeout_sec": 0, 00:18:07.895 "reconnect_delay_sec": 0, 00:18:07.895 "fast_io_fail_timeout_sec": 0, 00:18:07.895 "disable_auto_failback": false, 00:18:07.895 "generate_uuids": false, 00:18:07.895 "transport_tos": 0, 00:18:07.895 "nvme_error_stat": false, 00:18:07.895 "rdma_srq_size": 0, 00:18:07.895 "io_path_stat": false, 00:18:07.895 "allow_accel_sequence": false, 00:18:07.895 "rdma_max_cq_size": 0, 00:18:07.895 "rdma_cm_event_timeout_ms": 0, 00:18:07.895 "dhchap_digests": [ 00:18:07.895 "sha256", 00:18:07.895 "sha384", 00:18:07.895 "sha512" 00:18:07.895 ], 00:18:07.895 "dhchap_dhgroups": [ 00:18:07.895 "null", 00:18:07.895 "ffdhe2048", 00:18:07.895 "ffdhe3072", 00:18:07.895 "ffdhe4096", 00:18:07.895 "ffdhe6144", 00:18:07.895 "ffdhe8192" 00:18:07.895 ] 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "bdev_nvme_attach_controller", 00:18:07.895 "params": { 00:18:07.895 "name": "nvme0", 00:18:07.895 "trtype": "TCP", 00:18:07.895 "adrfam": "IPv4", 00:18:07.895 "traddr": "10.0.0.3", 00:18:07.895 "trsvcid": "4420", 00:18:07.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.895 "prchk_reftag": false, 00:18:07.895 "prchk_guard": false, 00:18:07.895 "ctrlr_loss_timeout_sec": 0, 00:18:07.895 "reconnect_delay_sec": 0, 00:18:07.895 "fast_io_fail_timeout_sec": 0, 00:18:07.895 "psk": "key0", 00:18:07.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.895 "hdgst": false, 00:18:07.895 "ddgst": false, 00:18:07.895 "multipath": "multipath" 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.895 "method": "bdev_nvme_set_hotplug", 00:18:07.895 "params": { 00:18:07.895 "period_us": 100000, 00:18:07.895 "enable": false 00:18:07.895 } 00:18:07.895 }, 00:18:07.895 { 00:18:07.896 "method": "bdev_enable_histogram", 00:18:07.896 "params": { 00:18:07.896 "name": "nvme0n1", 00:18:07.896 "enable": true 00:18:07.896 } 00:18:07.896 }, 00:18:07.896 { 00:18:07.896 "method": "bdev_wait_for_examine" 00:18:07.896 } 00:18:07.896 ] 00:18:07.896 }, 00:18:07.896 { 00:18:07.896 "subsystem": "nbd", 00:18:07.896 "config": [] 00:18:07.896 } 00:18:07.896 ] 00:18:07.896 }' 00:18:07.896 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:07.896 [2024-11-17 08:18:12.770743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:07.896 [2024-11-17 08:18:12.770866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75716 ] 00:18:08.155 [2024-11-17 08:18:12.956048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.155 [2024-11-17 08:18:13.062529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.414 [2024-11-17 08:18:13.346966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:08.673 [2024-11-17 08:18:13.472376] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.673 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.673 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.673 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:08.673 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:08.932 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.932 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.932 Running I/O for 1 seconds... 
00:18:10.312 4204.00 IOPS, 16.42 MiB/s 00:18:10.312 Latency(us) 00:18:10.312 [2024-11-17T08:18:15.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.312 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:10.312 Verification LBA range: start 0x0 length 0x2000 00:18:10.312 nvme0n1 : 1.01 4270.06 16.68 0.00 0.00 29773.39 5106.02 23792.99 00:18:10.312 [2024-11-17T08:18:15.324Z] =================================================================================================================== 00:18:10.312 [2024-11-17T08:18:15.324Z] Total : 4270.06 16.68 0.00 0.00 29773.39 5106.02 23792.99 00:18:10.312 { 00:18:10.312 "results": [ 00:18:10.312 { 00:18:10.312 "job": "nvme0n1", 00:18:10.312 "core_mask": "0x2", 00:18:10.312 "workload": "verify", 00:18:10.312 "status": "finished", 00:18:10.312 "verify_range": { 00:18:10.312 "start": 0, 00:18:10.312 "length": 8192 00:18:10.312 }, 00:18:10.312 "queue_depth": 128, 00:18:10.312 "io_size": 4096, 00:18:10.312 "runtime": 1.01474, 00:18:10.312 "iops": 4270.059325541518, 00:18:10.312 "mibps": 16.679919240396554, 00:18:10.312 "io_failed": 0, 00:18:10.312 "io_timeout": 0, 00:18:10.312 "avg_latency_us": 29773.38789619591, 00:18:10.312 "min_latency_us": 5106.017670682731, 00:18:10.312 "max_latency_us": 23792.98955823293 00:18:10.312 } 00:18:10.312 ], 00:18:10.312 "core_count": 1 00:18:10.312 } 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:10.312 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:10.312 nvmf_trace.0 00:18:10.312 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:10.312 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 75716 00:18:10.312 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75716 ']' 00:18:10.312 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75716 00:18:10.312 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.313 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.313 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75716 00:18:10.313 08:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.313 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.313 killing process with pid 75716 00:18:10.313 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75716' 00:18:10.313 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75716 00:18:10.313 Received shutdown signal, test time was about 1.000000 seconds 00:18:10.313 00:18:10.313 Latency(us) 00:18:10.313 [2024-11-17T08:18:15.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.313 [2024-11-17T08:18:15.325Z] =================================================================================================================== 00:18:10.313 [2024-11-17T08:18:15.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.313 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75716 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:11.250 rmmod nvme_tcp 00:18:11.250 rmmod nvme_fabrics 00:18:11.250 rmmod nvme_keyring 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 75684 ']' 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 75684 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75684 ']' 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75684 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75684 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.250 killing process with pid 75684 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75684' 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75684 00:18:11.250 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 75684 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:12.630 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.IOKwJ3dia6 /tmp/tmp.D5BrlyeIpd /tmp/tmp.qQ8tnfd2jw 00:18:12.889 00:18:12.889 real 1m45.453s 00:18:12.889 user 2m36.759s 00:18:12.889 sys 0m35.442s 00:18:12.889 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.889 ************************************ 00:18:12.889 END TEST nvmf_tls 00:18:12.889 08:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.889 ************************************ 00:18:13.149 08:18:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:13.149 08:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.149 08:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.149 08:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.149 ************************************ 00:18:13.149 START TEST nvmf_fips 00:18:13.149 ************************************ 00:18:13.149 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:13.149 * Looking for test storage... 00:18:13.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.149 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.150 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.410 --rc genhtml_branch_coverage=1 00:18:13.410 --rc genhtml_function_coverage=1 00:18:13.410 --rc genhtml_legend=1 00:18:13.410 --rc geninfo_all_blocks=1 00:18:13.410 --rc geninfo_unexecuted_blocks=1 00:18:13.410 00:18:13.410 ' 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.410 --rc genhtml_branch_coverage=1 00:18:13.410 --rc genhtml_function_coverage=1 00:18:13.410 --rc genhtml_legend=1 00:18:13.410 --rc geninfo_all_blocks=1 00:18:13.410 --rc geninfo_unexecuted_blocks=1 00:18:13.410 00:18:13.410 ' 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.410 --rc genhtml_branch_coverage=1 00:18:13.410 --rc genhtml_function_coverage=1 00:18:13.410 --rc genhtml_legend=1 00:18:13.410 --rc geninfo_all_blocks=1 00:18:13.410 --rc geninfo_unexecuted_blocks=1 00:18:13.410 00:18:13.410 ' 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.410 --rc genhtml_branch_coverage=1 00:18:13.410 --rc genhtml_function_coverage=1 00:18:13.410 --rc genhtml_legend=1 00:18:13.410 --rc geninfo_all_blocks=1 00:18:13.410 --rc geninfo_unexecuted_blocks=1 00:18:13.410 00:18:13.410 ' 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.410 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.411 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:13.411 Error setting digest 00:18:13.411 4092082D0E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:13.411 4092082D0E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.411 
08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:13.411 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.412 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:13.671 Cannot find device "nvmf_init_br" 00:18:13.671 08:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:13.671 Cannot find device "nvmf_init_br2" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:13.671 Cannot find device "nvmf_tgt_br" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.671 Cannot find device "nvmf_tgt_br2" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:13.671 Cannot find device "nvmf_init_br" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:13.671 Cannot find device "nvmf_init_br2" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:13.671 Cannot find device "nvmf_tgt_br" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:13.671 Cannot find device "nvmf_tgt_br2" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:13.671 Cannot find device "nvmf_br" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:13.671 Cannot find device "nvmf_init_if" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:13.671 Cannot find device "nvmf_init_if2" 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.671 08:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.671 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:13.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:18:13.931 00:18:13.931 --- 10.0.0.3 ping statistics --- 00:18:13.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.931 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:13.931 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:13.931 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:18:13.931 00:18:13.931 --- 10.0.0.4 ping statistics --- 00:18:13.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.931 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:18:13.931 00:18:13.931 --- 10.0.0.1 ping statistics --- 00:18:13.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.931 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:18:13.931 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:13.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:13.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:18:13.932 00:18:13.932 --- 10.0.0.2 ping statistics --- 00:18:13.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.932 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=76064 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 76064 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 76064 ']' 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.932 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:14.191 [2024-11-17 08:18:19.041257] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
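Before the target application is launched inside the namespace, the nvmf_veth_init sequence logged above builds a small bridged topology: two initiator-side veth pairs stay in the default namespace (10.0.0.1 and 10.0.0.2), two target-side pairs are moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), the peer ends are enslaved to the nvmf_br bridge, and iptables ACCEPT rules tagged with an SPDK_NVMF comment open TCP port 4420. A condensed sketch of one initiator/target pair, with names and addresses as they appear in the log (the second pair, nvmf_init_if2/nvmf_tgt_if2, is created the same way):

    # Target side lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # One initiator veth pair (kept in the default namespace) and one target pair.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Address plan from the log: initiators on .1/.2, targets on .3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the initiator-side and target-side peer ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP (port 4420) in; the comment lets teardown find and strip the rule later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # Sanity check: the target address must be reachable from the initiator side.
    ping -c 1 10.0.0.3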
00:18:14.191 [2024-11-17 08:18:19.041629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.450 [2024-11-17 08:18:19.230718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.450 [2024-11-17 08:18:19.373506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.450 [2024-11-17 08:18:19.373561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.450 [2024-11-17 08:18:19.373580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.450 [2024-11-17 08:18:19.373592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.450 [2024-11-17 08:18:19.373605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.451 [2024-11-17 08:18:19.374851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.710 [2024-11-17 08:18:19.626181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:14.970 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.5IK 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.5IK 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.5IK 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.5IK 00:18:14.971 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.230 [2024-11-17 08:18:20.180111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.230 [2024-11-17 08:18:20.195986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.230 [2024-11-17 08:18:20.196285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:15.489 malloc0 00:18:15.489 08:18:20 
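The target-side TLS setup logged here writes the retained PSK to an owner-only temp file and then configures the running nvmf_tgt through rpc.py; the TCP transport creation, the experimental TLS listener on 10.0.0.3:4420 and the malloc0 bdev are all visible in the notices above. The subsystem and namespace RPCs issued by setup_nvmf_tgt_conf are not echoed in this excerpt, so only the key handling and the transport call are sketched below; treat the bare "-t tcp" transport options as an assumption, since the exact flags are not shown:

    # Retained TLS PSK in the NVMe interchange format, copied from the log.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)   # this run got /tmp/spdk-psk.5IK
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"               # keep the key readable by the owner only

    # Target configuration goes through the JSON-RPC client.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp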
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=76100 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 76100 /var/tmp/bdevperf.sock 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 76100 ']' 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.489 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:15.489 [2024-11-17 08:18:20.438195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:15.489 [2024-11-17 08:18:20.438507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76100 ] 00:18:15.748 [2024-11-17 08:18:20.622222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.748 [2024-11-17 08:18:20.724406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.007 [2024-11-17 08:18:20.924096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.266 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.266 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:16.266 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.5IK 00:18:16.525 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.784 [2024-11-17 08:18:21.594413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.784 TLSTESTn1 00:18:16.784 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.043 Running I/O for 10 seconds... 
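On the initiator side, the run above starts bdevperf idle on its own RPC socket, registers the PSK file as a keyring entry named key0, attaches an NVMe-oF controller over TLS (the resulting bdev is TLSTESTn1), and then triggers the workload with bdevperf.py. Condensed from the commands in the log, minus the waitforlisten retry loop the harness wraps around them:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) on its own socket: queue depth 128, 4 KiB verify I/O for 10 s.
    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

    # Register the PSK file under the name key0, then attach to the target over TLS with it.
    $rpc -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.5IK
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the timed run; the IOPS and latency table below is its output.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests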
00:18:18.919 4121.00 IOPS, 16.10 MiB/s [2024-11-17T08:18:24.866Z] 4208.50 IOPS, 16.44 MiB/s [2024-11-17T08:18:25.804Z] 4243.00 IOPS, 16.57 MiB/s [2024-11-17T08:18:27.184Z] 4243.50 IOPS, 16.58 MiB/s [2024-11-17T08:18:28.120Z] 4253.80 IOPS, 16.62 MiB/s [2024-11-17T08:18:29.058Z] 4260.33 IOPS, 16.64 MiB/s [2024-11-17T08:18:30.006Z] 4262.71 IOPS, 16.65 MiB/s [2024-11-17T08:18:30.945Z] 4267.62 IOPS, 16.67 MiB/s [2024-11-17T08:18:31.883Z] 4261.44 IOPS, 16.65 MiB/s [2024-11-17T08:18:31.883Z] 4259.20 IOPS, 16.64 MiB/s 00:18:26.871 Latency(us) 00:18:26.871 [2024-11-17T08:18:31.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.871 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:26.871 Verification LBA range: start 0x0 length 0x2000 00:18:26.871 TLSTESTn1 : 10.01 4265.86 16.66 0.00 0.00 29964.34 3447.88 26003.84 00:18:26.871 [2024-11-17T08:18:31.883Z] =================================================================================================================== 00:18:26.871 [2024-11-17T08:18:31.883Z] Total : 4265.86 16.66 0.00 0.00 29964.34 3447.88 26003.84 00:18:26.871 { 00:18:26.871 "results": [ 00:18:26.871 { 00:18:26.871 "job": "TLSTESTn1", 00:18:26.871 "core_mask": "0x4", 00:18:26.871 "workload": "verify", 00:18:26.871 "status": "finished", 00:18:26.871 "verify_range": { 00:18:26.871 "start": 0, 00:18:26.871 "length": 8192 00:18:26.871 }, 00:18:26.871 "queue_depth": 128, 00:18:26.871 "io_size": 4096, 00:18:26.871 "runtime": 10.014153, 00:18:26.871 "iops": 4265.862524768695, 00:18:26.871 "mibps": 16.663525487377715, 00:18:26.871 "io_failed": 0, 00:18:26.871 "io_timeout": 0, 00:18:26.871 "avg_latency_us": 29964.3418017678, 00:18:26.871 "min_latency_us": 3447.877911646586, 00:18:26.871 "max_latency_us": 26003.842570281126 00:18:26.871 } 00:18:26.871 ], 00:18:26.871 "core_count": 1 00:18:26.871 } 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:26.871 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:26.871 nvmf_trace.0 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 76100 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 76100 ']' 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
76100 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76100 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76100' 00:18:27.131 killing process with pid 76100 00:18:27.131 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.131 00:18:27.131 Latency(us) 00:18:27.131 [2024-11-17T08:18:32.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.131 [2024-11-17T08:18:32.143Z] =================================================================================================================== 00:18:27.131 [2024-11-17T08:18:32.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 76100 00:18:27.131 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 76100 00:18:28.068 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:28.068 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:28.068 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.328 rmmod nvme_tcp 00:18:28.328 rmmod nvme_fabrics 00:18:28.328 rmmod nvme_keyring 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 76064 ']' 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 76064 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 76064 ']' 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 76064 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76064 00:18:28.328 killing process with pid 76064 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76064' 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 76064 00:18:28.328 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 76064 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:29.763 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:29.764 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:29.764 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:29.764 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:30.024 08:18:34 
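The nvmftestfini/nvmf_tcp_fini teardown logged here does not track individual firewall rules; it relies on the SPDK_NVMF comment attached when the rules were inserted, filtering those lines out of a full iptables-save dump and restoring everything else, then unloads the initiator modules and tears down the veth/bridge/namespace topology (the interface and namespace deletions continue just below). The firewall and module parts reduce to:

    # Drop every rule this test added (identified by its SPDK_NVMF comment),
    # leaving all other firewall state untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Unload the initiator-side kernel modules pulled in by "modprobe nvme-tcp" earlier;
    # nvme_fabrics and nvme_keyring are removed along the way, as the rmmod lines above show.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics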
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.5IK 00:18:30.024 ************************************ 00:18:30.024 END TEST nvmf_fips 00:18:30.024 ************************************ 00:18:30.024 00:18:30.024 real 0m16.974s 00:18:30.024 user 0m21.300s 00:18:30.024 sys 0m7.202s 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.024 ************************************ 00:18:30.024 START TEST nvmf_control_msg_list 00:18:30.024 ************************************ 00:18:30.024 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:30.285 * Looking for test storage... 00:18:30.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:30.285 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:30.285 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:18:30.285 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.286 --rc genhtml_branch_coverage=1 00:18:30.286 --rc genhtml_function_coverage=1 00:18:30.286 --rc genhtml_legend=1 00:18:30.286 --rc geninfo_all_blocks=1 00:18:30.286 --rc geninfo_unexecuted_blocks=1 00:18:30.286 00:18:30.286 ' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.286 --rc genhtml_branch_coverage=1 00:18:30.286 --rc genhtml_function_coverage=1 00:18:30.286 --rc genhtml_legend=1 00:18:30.286 --rc geninfo_all_blocks=1 00:18:30.286 --rc geninfo_unexecuted_blocks=1 00:18:30.286 00:18:30.286 ' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.286 --rc genhtml_branch_coverage=1 00:18:30.286 --rc genhtml_function_coverage=1 00:18:30.286 --rc genhtml_legend=1 00:18:30.286 --rc geninfo_all_blocks=1 00:18:30.286 --rc geninfo_unexecuted_blocks=1 00:18:30.286 00:18:30.286 ' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.286 --rc genhtml_branch_coverage=1 00:18:30.286 --rc genhtml_function_coverage=1 00:18:30.286 --rc genhtml_legend=1 00:18:30.286 --rc geninfo_all_blocks=1 00:18:30.286 --rc 
geninfo_unexecuted_blocks=1 00:18:30.286 00:18:30.286 ' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.286 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.286 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:30.287 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:30.547 Cannot find device "nvmf_init_br" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:30.547 Cannot find device "nvmf_init_br2" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:30.547 Cannot find device "nvmf_tgt_br" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.547 Cannot find device "nvmf_tgt_br2" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:30.547 Cannot find device "nvmf_init_br" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:30.547 Cannot find device "nvmf_init_br2" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:30.547 Cannot find device "nvmf_tgt_br" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:30.547 Cannot find device "nvmf_tgt_br2" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:30.547 Cannot find device "nvmf_br" 00:18:30.547 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:30.548 Cannot find 
device "nvmf_init_if" 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:30.548 Cannot find device "nvmf_init_if2" 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:30.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:30.548 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:30.808 08:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:30.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:18:30.808 00:18:30.808 --- 10.0.0.3 ping statistics --- 00:18:30.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.808 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:18:30.808 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:30.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:30.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.119 ms 00:18:30.808 00:18:30.808 --- 10.0.0.4 ping statistics --- 00:18:30.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.808 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:18:31.068 00:18:31.068 --- 10.0.0.1 ping statistics --- 00:18:31.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.068 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:31.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:18:31.068 00:18:31.068 --- 10.0.0.2 ping statistics --- 00:18:31.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.068 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=76517 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 76517 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 76517 ']' 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
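[editor's note] The trace above is the harness's veth setup from test/nvmf/common.sh: it creates a private network namespace for the target, four veth pairs, a bridge joining the host-side peers, the 10.0.0.x addressing, iptables ACCEPT rules for the NVMe/TCP port, and ping checks in both directions. A condensed sketch of that topology for one of the two pairs, using only standard iproute2/iptables commands with the interface names and addresses copied from the log (run as root; a simplified sketch, not the harness's exact function):

# namespace for the target side
ip netns add nvmf_tgt_ns_spdk
# veth pair for the initiator and one for the target; *_br ends stay on the host
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
# bridge the host-side peers together so the two ends can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in, and allow forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                            # sanity check: initiator -> target

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is built the same way, which is why the pings in the trace exercise all four addresses before the target is started.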
00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.068 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:31.068 [2024-11-17 08:18:35.990393] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:31.068 [2024-11-17 08:18:35.990583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.328 [2024-11-17 08:18:36.170259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.328 [2024-11-17 08:18:36.301084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.328 [2024-11-17 08:18:36.301145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.328 [2024-11-17 08:18:36.301162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.328 [2024-11-17 08:18:36.301183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.328 [2024-11-17 08:18:36.301196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.328 [2024-11-17 08:18:36.302637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.588 [2024-11-17 08:18:36.552625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.848 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.108 [2024-11-17 08:18:36.864821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.108 Malloc0 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.108 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.108 [2024-11-17 08:18:36.951548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76548 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76549 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76550 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.109 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76548 00:18:32.368 [2024-11-17 08:18:37.212701] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:32.368 [2024-11-17 08:18:37.212959] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:32.368 [2024-11-17 08:18:37.223650] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:33.307 Initializing NVMe Controllers 00:18:33.307 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:33.307 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:33.307 Initialization complete. Launching workers. 00:18:33.307 ======================================================== 00:18:33.307 Latency(us) 00:18:33.307 Device Information : IOPS MiB/s Average min max 00:18:33.307 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3967.00 15.50 251.81 171.20 1025.15 00:18:33.307 ======================================================== 00:18:33.307 Total : 3967.00 15.50 251.81 171.20 1025.15 00:18:33.307 00:18:33.307 Initializing NVMe Controllers 00:18:33.307 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:33.307 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:33.307 Initialization complete. Launching workers. 00:18:33.307 ======================================================== 00:18:33.307 Latency(us) 00:18:33.307 Device Information : IOPS MiB/s Average min max 00:18:33.307 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3960.99 15.47 252.22 142.78 1017.38 00:18:33.307 ======================================================== 00:18:33.307 Total : 3960.99 15.47 252.22 142.78 1017.38 00:18:33.307 00:18:33.307 Initializing NVMe Controllers 00:18:33.307 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:33.307 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:33.307 Initialization complete. Launching workers. 
00:18:33.307 ======================================================== 00:18:33.307 Latency(us) 00:18:33.307 Device Information : IOPS MiB/s Average min max 00:18:33.307 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4006.00 15.65 249.28 112.14 1254.01 00:18:33.307 ======================================================== 00:18:33.307 Total : 4006.00 15.65 249.28 112.14 1254.01 00:18:33.307 00:18:33.307 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76549 00:18:33.307 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76550 00:18:33.307 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:33.307 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:33.307 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.307 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.567 rmmod nvme_tcp 00:18:33.567 rmmod nvme_fabrics 00:18:33.567 rmmod nvme_keyring 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 76517 ']' 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 76517 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 76517 ']' 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 76517 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76517 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.567 killing process with pid 76517 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76517' 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 76517 00:18:33.567 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 76517 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:34.948 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:35.208 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:35.208 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:35.208 00:18:35.208 real 0m5.126s 00:18:35.208 user 0m6.432s 00:18:35.208 
sys 0m2.286s 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:35.208 ************************************ 00:18:35.208 END TEST nvmf_control_msg_list 00:18:35.208 ************************************ 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.208 ************************************ 00:18:35.208 START TEST nvmf_wait_for_buf 00:18:35.208 ************************************ 00:18:35.208 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:35.469 * Looking for test storage... 00:18:35.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:35.469 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.470 --rc genhtml_branch_coverage=1 00:18:35.470 --rc genhtml_function_coverage=1 00:18:35.470 --rc genhtml_legend=1 00:18:35.470 --rc geninfo_all_blocks=1 00:18:35.470 --rc geninfo_unexecuted_blocks=1 00:18:35.470 00:18:35.470 ' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.470 --rc genhtml_branch_coverage=1 00:18:35.470 --rc genhtml_function_coverage=1 00:18:35.470 --rc genhtml_legend=1 00:18:35.470 --rc geninfo_all_blocks=1 00:18:35.470 --rc geninfo_unexecuted_blocks=1 00:18:35.470 00:18:35.470 ' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.470 --rc genhtml_branch_coverage=1 00:18:35.470 --rc genhtml_function_coverage=1 00:18:35.470 --rc genhtml_legend=1 00:18:35.470 --rc geninfo_all_blocks=1 00:18:35.470 --rc geninfo_unexecuted_blocks=1 00:18:35.470 00:18:35.470 ' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:35.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.470 --rc genhtml_branch_coverage=1 00:18:35.470 --rc genhtml_function_coverage=1 00:18:35.470 --rc genhtml_legend=1 00:18:35.470 --rc geninfo_all_blocks=1 00:18:35.470 --rc geninfo_unexecuted_blocks=1 00:18:35.470 00:18:35.470 ' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.470 08:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.470 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
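[editor's note] The "line 33: [: : integer expression expected" message in the trace above is benign: nvmf/common.sh runs '[' '' -eq 1 ']' with an empty value, the integer test fails with that warning, and the script simply takes the false branch. A minimal reproduction of the same behaviour and the usual defensive form; FLAG_FROM_ENV is a hypothetical name used only for illustration:

FLAG_FROM_ENV=""
[ "$FLAG_FROM_ENV" -eq 1 ] && echo "enabled"        # prints the same "[: : integer expression expected", test evaluates false
[ "${FLAG_FROM_ENV:-0}" -eq 1 ] && echo "enabled"   # defaulting the empty value to 0 keeps the check quiet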
00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.470 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:35.731 Cannot find device "nvmf_init_br" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:35.731 Cannot find device "nvmf_init_br2" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:35.731 Cannot find device "nvmf_tgt_br" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.731 Cannot find device "nvmf_tgt_br2" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:35.731 Cannot find device "nvmf_init_br" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:35.731 Cannot find device "nvmf_init_br2" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:35.731 Cannot find device "nvmf_tgt_br" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:35.731 Cannot find device "nvmf_tgt_br2" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:35.731 Cannot find device "nvmf_br" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:35.731 Cannot find device "nvmf_init_if" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:35.731 Cannot find device "nvmf_init_if2" 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.731 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:35.731 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.732 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.732 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:35.732 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.992 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.992 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.992 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.992 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.992 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:35.992 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.993 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:36.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:36.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:36.253 00:18:36.253 --- 10.0.0.3 ping statistics --- 00:18:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.253 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:36.253 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:36.253 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:18:36.253 00:18:36.253 --- 10.0.0.4 ping statistics --- 00:18:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.253 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:36.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:18:36.253 00:18:36.253 --- 10.0.0.1 ping statistics --- 00:18:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.253 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:36.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:36.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:36.253 00:18:36.253 --- 10.0.0.2 ping statistics --- 00:18:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.253 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=76803 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 76803 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 76803 ']' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.253 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:36.253 [2024-11-17 08:18:41.222293] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
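[editor's note] Unlike the control_msg_list run earlier, the target here is started with --wait-for-rpc, which holds SPDK subsystem initialization until an explicit framework_start_init RPC; that window is what lets the wait_for_buf test shrink the shared iobuf small-buffer pool before the TCP transport exists (the rpc_cmd calls traced next). A rough sketch of that pre-init flow, assuming scripts/rpc.py is driven directly instead of through the harness's rpc_cmd wrapper, with the binary path, flags, and pool size copied verbatim from the trace:

# launch the target inside the test namespace, paused until framework_start_init
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
# (the harness waits for /var/tmp/spdk.sock to appear before issuing any RPCs)

# pre-init configuration: a deliberately tiny small-buffer pool
scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
scripts/rpc.py framework_start_init

# after init, the transport, malloc bdev, subsystem and listener are created exactly as
# traced below, and iobuf_get_stats later reports the small_pool retry count (4750 here)
# that the test asserts is nonzero.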
00:18:36.253 [2024-11-17 08:18:41.222480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.514 [2024-11-17 08:18:41.410708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.773 [2024-11-17 08:18:41.556187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.773 [2024-11-17 08:18:41.556243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.773 [2024-11-17 08:18:41.556276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.773 [2024-11-17 08:18:41.556297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.773 [2024-11-17 08:18:41.556311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.773 [2024-11-17 08:18:41.557679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.033 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.033 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:37.033 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.033 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.033 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.293 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.293 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:37.294 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.294 08:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.294 [2024-11-17 08:18:42.252853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.553 Malloc0 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.553 [2024-11-17 08:18:42.454528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.553 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.554 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.554 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:37.554 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.554 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.554 [2024-11-17 08:18:42.486654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.554 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.554 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:37.812 [2024-11-17 08:18:42.744663] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:39.190 Initializing NVMe Controllers 00:18:39.190 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:39.190 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:39.190 Initialization complete. Launching workers. 00:18:39.190 ======================================================== 00:18:39.190 Latency(us) 00:18:39.190 Device Information : IOPS MiB/s Average min max 00:18:39.190 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.18 62.40 8011.99 5845.00 10031.17 00:18:39.190 ======================================================== 00:18:39.190 Total : 499.18 62.40 8011.99 5845.00 10031.17 00:18:39.190 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.190 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.190 rmmod nvme_tcp 00:18:39.450 rmmod nvme_fabrics 00:18:39.450 rmmod nvme_keyring 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 76803 ']' 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 76803 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 76803 ']' 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 76803 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76803 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76803' 00:18:39.450 killing process with pid 76803 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 76803 00:18:39.450 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 76803 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:40.831 00:18:40.831 real 0m5.555s 00:18:40.831 user 0m4.397s 00:18:40.831 sys 0m1.422s 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:40.831 ************************************ 00:18:40.831 END TEST nvmf_wait_for_buf 00:18:40.831 ************************************ 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.831 ************************************ 00:18:40.831 START TEST nvmf_fuzz 00:18:40.831 ************************************ 00:18:40.831 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:41.092 * Looking for test storage... 
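For context on the nvmf_wait_for_buf pass that just finished above: the test shrinks the transport's small iobuf pool (154 buffers of 8192 bytes) so that 128 KiB random reads over TCP are forced to wait for buffers, then asserts that the pool's retry counter is non-zero (4750 retries in this run). A condensed sketch of the commands it drives, with arguments copied from the log; using scripts/rpc.py directly in place of the test's rpc_cmd wrapper is an assumption:

    # shrink the iobuf small pool before framework init so buffer exhaustion is easy to hit
    scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    scripts/rpc.py framework_start_init
    # back a TCP subsystem with a small malloc bdev and a deliberately tiny transport buffer budget
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # generate load, then check that the nvmf_TCP small pool actually had to retry
    build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retry_count=$(scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && echo "FAIL: no buffer-wait retries observed"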
00:18:41.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:41.092 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:41.092 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:41.092 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.092 --rc genhtml_branch_coverage=1 00:18:41.092 --rc genhtml_function_coverage=1 00:18:41.092 --rc genhtml_legend=1 00:18:41.092 --rc geninfo_all_blocks=1 00:18:41.092 --rc geninfo_unexecuted_blocks=1 00:18:41.092 00:18:41.092 ' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.092 --rc genhtml_branch_coverage=1 00:18:41.092 --rc genhtml_function_coverage=1 00:18:41.092 --rc genhtml_legend=1 00:18:41.092 --rc geninfo_all_blocks=1 00:18:41.092 --rc geninfo_unexecuted_blocks=1 00:18:41.092 00:18:41.092 ' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.092 --rc genhtml_branch_coverage=1 00:18:41.092 --rc genhtml_function_coverage=1 00:18:41.092 --rc genhtml_legend=1 00:18:41.092 --rc geninfo_all_blocks=1 00:18:41.092 --rc geninfo_unexecuted_blocks=1 00:18:41.092 00:18:41.092 ' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.092 --rc genhtml_branch_coverage=1 00:18:41.092 --rc genhtml_function_coverage=1 00:18:41.092 --rc genhtml_legend=1 00:18:41.092 --rc geninfo_all_blocks=1 00:18:41.092 --rc geninfo_unexecuted_blocks=1 00:18:41.092 00:18:41.092 ' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
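The lt / cmp_versions calls traced just above implement a plain component-wise version comparison: the harness splits each version string on dots and dashes, compares the pieces numerically until one side wins, and uses the result to decide whether the installed lcov (a 1.x release here) needs the extra branch/function coverage flags. A simplified, self-contained sketch of that comparison, not the full scripts/common.sh implementation:

    # return 0 (true) when version $1 is strictly lower than $2, comparing numeric components
    lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'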
00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.092 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.353 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
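nvmftestinit has just installed its cleanup trap; the nvmf_veth_init step recorded in the lines that follow rebuilds the virtual test network from scratch. Condensed from those ip commands, the topology is: two initiator veth pairs left in the root namespace (10.0.0.1 and 10.0.0.2), two target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and one bridge, nvmf_br, tying the near ends together. A minimal sketch showing just one pair per side:

    ip netns add nvmf_tgt_ns_spdk
    # initiator side stays in the root namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    # target side is pushed into the namespace where nvmf_tgt will run
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # one bridge joins the peer ends so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up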
00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:41.354 Cannot find device "nvmf_init_br" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:41.354 08:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:41.354 Cannot find device "nvmf_init_br2" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:41.354 Cannot find device "nvmf_tgt_br" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.354 Cannot find device "nvmf_tgt_br2" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:41.354 Cannot find device "nvmf_init_br" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:41.354 Cannot find device "nvmf_init_br2" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:41.354 Cannot find device "nvmf_tgt_br" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:41.354 Cannot find device "nvmf_tgt_br2" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:41.354 Cannot find device "nvmf_br" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:41.354 Cannot find device "nvmf_init_if" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:41.354 Cannot find device "nvmf_init_if2" 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.354 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:41.614 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:41.615 08:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:41.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:41.615 00:18:41.615 --- 10.0.0.3 ping statistics --- 00:18:41.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.615 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:41.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:41.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:18:41.615 00:18:41.615 --- 10.0.0.4 ping statistics --- 00:18:41.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.615 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:18:41.615 00:18:41.615 --- 10.0.0.1 ping statistics --- 00:18:41.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.615 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:41.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
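A detail worth noting in the firewall lines above: the harness's ipts helper tags every rule it inserts with an SPDK_NVMF comment, so the later iptr cleanup (seen at the end of the previous test and again at the end of this one) can strip exactly those rules without disturbing the rest of the host firewall. In effect:

    # each accept rule carries an SPDK_NVMF comment when it is added
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown filters the saved ruleset by that tag and restores everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore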
00:18:41.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:18:41.615 00:18:41.615 --- 10.0.0.2 ping statistics --- 00:18:41.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.615 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77107 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77107 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 77107 ']' 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
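With connectivity verified, the nvmf_tgt instance for the fuzz test comes up inside the namespace (pid 77107, core mask 0x1) and the harness waits on its RPC socket. The lines that follow then create the fuzz target and run the fuzzer twice against it; a condensed sketch with arguments as they appear in the log, paths shortened to the repo root:

    # launch the target inside the test namespace and wait for /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # expose a Malloc0-backed subsystem at 10.0.0.3:4420 (nvmf_create_transport, bdev_malloc_create,
    # nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener via rpc_cmd)
    # first pass: time-bounded randomized commands (-t 30, fixed seed -S 123456)
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a
    # second pass: driven by the bundled example.json command set
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
        -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' \
        -j test/app/fuzz/nvme_fuzz/example.json -a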
00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.615 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.555 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.814 Malloc0 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:42.814 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:43.383 Shutting down the fuzz application 00:18:43.383 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:43.951 Shutting down the fuzz application 00:18:43.951 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.951 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.951 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.951 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.952 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.212 rmmod nvme_tcp 00:18:44.212 rmmod nvme_fabrics 00:18:44.212 rmmod nvme_keyring 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 77107 ']' 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 77107 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 77107 ']' 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 77107 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77107 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.212 killing process with pid 77107 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77107' 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 77107 00:18:44.212 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 77107 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:45.590 08:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:18:45.590 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:45.591 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:45.850 00:18:45.850 real 0m4.974s 00:18:45.850 user 0m4.783s 00:18:45.850 sys 0m1.258s 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.850 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:45.850 ************************************ 00:18:45.850 END TEST nvmf_fuzz 00:18:45.850 ************************************ 00:18:46.111 08:18:50 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:46.111 08:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.111 08:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.111 08:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.111 ************************************ 00:18:46.111 START TEST nvmf_multiconnection 00:18:46.111 ************************************ 00:18:46.111 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:46.111 * Looking for test storage... 00:18:46.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.111 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.373 --rc genhtml_branch_coverage=1 00:18:46.373 --rc genhtml_function_coverage=1 00:18:46.373 --rc genhtml_legend=1 00:18:46.373 --rc geninfo_all_blocks=1 00:18:46.373 --rc geninfo_unexecuted_blocks=1 00:18:46.373 00:18:46.373 ' 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.373 --rc genhtml_branch_coverage=1 00:18:46.373 --rc genhtml_function_coverage=1 00:18:46.373 --rc genhtml_legend=1 00:18:46.373 --rc geninfo_all_blocks=1 00:18:46.373 --rc geninfo_unexecuted_blocks=1 00:18:46.373 00:18:46.373 ' 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.373 --rc genhtml_branch_coverage=1 00:18:46.373 --rc genhtml_function_coverage=1 00:18:46.373 --rc genhtml_legend=1 00:18:46.373 --rc geninfo_all_blocks=1 00:18:46.373 --rc geninfo_unexecuted_blocks=1 00:18:46.373 00:18:46.373 ' 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:46.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.373 --rc genhtml_branch_coverage=1 00:18:46.373 --rc genhtml_function_coverage=1 00:18:46.373 --rc genhtml_legend=1 00:18:46.373 --rc geninfo_all_blocks=1 00:18:46.373 --rc geninfo_unexecuted_blocks=1 00:18:46.373 00:18:46.373 ' 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.373 
08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.373 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.374 08:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:46.374 Cannot find device "nvmf_init_br" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:46.374 Cannot find device "nvmf_init_br2" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:46.374 Cannot find device "nvmf_tgt_br" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.374 Cannot find device "nvmf_tgt_br2" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:46.374 Cannot find device "nvmf_init_br" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:46.374 Cannot find device "nvmf_init_br2" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:46.374 Cannot find device "nvmf_tgt_br" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:46.374 Cannot find device "nvmf_tgt_br2" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:46.374 Cannot find device "nvmf_br" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:46.374 Cannot find device "nvmf_init_if" 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:46.374 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:46.635 Cannot find device "nvmf_init_if2" 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:46.635 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:46.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:46.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:18:46.895 00:18:46.895 --- 10.0.0.3 ping statistics --- 00:18:46.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.895 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:46.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:46.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:18:46.895 00:18:46.895 --- 10.0.0.4 ping statistics --- 00:18:46.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.895 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:46.895 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:46.895 00:18:46.895 --- 10.0.0.1 ping statistics --- 00:18:46.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.896 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:46.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:18:46.896 00:18:46.896 --- 10.0.0.2 ping statistics --- 00:18:46.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.896 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=77373 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 77373 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 77373 ']' 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
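[Editor's sketch, not part of the captured console output] The trace above shows nvmf_veth_init from test/nvmf/common.sh building the virtual test network: two initiator-side veth interfaces on the host (10.0.0.1 and 10.0.0.2), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all peer ends tied together by the nvmf_br bridge, iptables rules accepting NVMe/TCP on port 4420, and ping checks in both directions. The following is a condensed, hand-written bash sketch of those steps; interface names, addresses, and the nvmf_tgt invocation are taken from the log, and it assumes root plus iproute2/iptables, not that this is the literal script.

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init sets up, per the trace above.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Four veth pairs: two initiator-side, two target-side (target ends go into the netns).
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the peer ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic on port 4420 and let the bridge forward between its own ports.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings mirroring the trace: host -> namespace and namespace -> host.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2

# The target is then started inside the namespace (path and flags as reported in the log):
# ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Once nvmf_tgt is listening on /var/tmp/spdk.sock, the trace that follows issues the RPCs nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener for cnode1 through cnode11, and then connects to each subsystem over 10.0.0.3:4420 with nvme connect, waiting for the corresponding SPDKn serial to appear in lsblk.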
00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.896 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.155 [2024-11-17 08:18:51.914882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:47.155 [2024-11-17 08:18:51.915016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.155 [2024-11-17 08:18:52.104808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.415 [2024-11-17 08:18:52.247614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.415 [2024-11-17 08:18:52.247674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.415 [2024-11-17 08:18:52.247691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.415 [2024-11-17 08:18:52.247702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.415 [2024-11-17 08:18:52.247716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.415 [2024-11-17 08:18:52.250146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.415 [2024-11-17 08:18:52.250352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.415 [2024-11-17 08:18:52.251192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.415 [2024-11-17 08:18:52.251227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.674 [2024-11-17 08:18:52.508076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.940 [2024-11-17 08:18:52.774293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:47.940 08:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.940 Malloc1 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.940 [2024-11-17 08:18:52.928586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.940 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 Malloc2 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 Malloc3 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.201 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.461 Malloc4 00:18:48.461 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.461 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:48.461 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.461 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.461 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 Malloc5 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:48.462 
08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.462 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 Malloc6 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 Malloc7 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.722 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.982 Malloc8 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.982 
08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.982 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 Malloc9 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.983 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 Malloc10 00:18:49.243 08:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 Malloc11 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.243 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:49.503 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:49.503 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:49.503 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.503 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:49.503 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:51.411 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:51.411 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:51.411 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:51.671 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:53.579 08:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:53.579 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:53.579 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:53.839 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:55.748 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:55.748 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:55.748 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:18:56.008 08:19:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.560 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:18:58.560 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:58.560 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:58.560 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.560 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:58.560 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:00.532 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.533 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:19:00.533 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:00.533 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:00.533 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:19:00.533 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:00.533 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.439 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:19:02.698 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:02.698 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:02.698 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.698 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:02.698 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:04.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:19:04.865 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:04.865 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:19:04.865 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.865 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:04.865 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:06.773 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:19:07.033 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:07.033 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:07.033 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.033 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:07.033 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:08.943 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:19:09.202 08:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:09.202 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:09.202 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.202 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:09.202 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:11.111 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:19:11.403 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:11.403 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:11.403 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.403 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:11.403 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:13.309 08:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:13.568 [global] 00:19:13.568 thread=1 00:19:13.568 invalidate=1 00:19:13.568 rw=read 00:19:13.568 time_based=1 
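For reference, the connect/wait cycle traced above (target/multiconnection.sh@28-30 together with the waitforserial helper at common/autotest_common.sh@1202-1212) boils down to the loop below. This is a minimal sketch reconstructed from the trace, not the canonical script: the NQNs, host UUID, address and port are copied from the log, NVMF_SUBSYS is 11 in this run (see the `seq 1 11` in the teardown), and the exact ordering inside the helper (sleep before or after the counter check) is an approximation.

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n $2 ]] && nvme_device_counter=$2      # optional expected-device count, empty in this run
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL matches, e.g. SPDK3
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 "$NVMF_SUBSYS"); do           # 11 subsystems in this run
    nvme connect \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
        --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
    waitforserial "SPDK$i"
done

Each iteration blocks until lsblk reports exactly one namespace with the expected SPDK serial, which is why the trace shows a `sleep 2` followed by an `lsblk | grep -c` pair after every `nvme connect`.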
00:19:13.568 runtime=10 00:19:13.568 ioengine=libaio 00:19:13.568 direct=1 00:19:13.568 bs=262144 00:19:13.568 iodepth=64 00:19:13.568 norandommap=1 00:19:13.568 numjobs=1 00:19:13.568 00:19:13.568 [job0] 00:19:13.568 filename=/dev/nvme0n1 00:19:13.568 [job1] 00:19:13.568 filename=/dev/nvme10n1 00:19:13.568 [job2] 00:19:13.568 filename=/dev/nvme1n1 00:19:13.568 [job3] 00:19:13.568 filename=/dev/nvme2n1 00:19:13.568 [job4] 00:19:13.568 filename=/dev/nvme3n1 00:19:13.568 [job5] 00:19:13.568 filename=/dev/nvme4n1 00:19:13.568 [job6] 00:19:13.568 filename=/dev/nvme5n1 00:19:13.568 [job7] 00:19:13.568 filename=/dev/nvme6n1 00:19:13.568 [job8] 00:19:13.568 filename=/dev/nvme7n1 00:19:13.568 [job9] 00:19:13.568 filename=/dev/nvme8n1 00:19:13.568 [job10] 00:19:13.568 filename=/dev/nvme9n1 00:19:13.828 Could not set queue depth (nvme0n1) 00:19:13.828 Could not set queue depth (nvme10n1) 00:19:13.828 Could not set queue depth (nvme1n1) 00:19:13.828 Could not set queue depth (nvme2n1) 00:19:13.828 Could not set queue depth (nvme3n1) 00:19:13.828 Could not set queue depth (nvme4n1) 00:19:13.828 Could not set queue depth (nvme5n1) 00:19:13.828 Could not set queue depth (nvme6n1) 00:19:13.828 Could not set queue depth (nvme7n1) 00:19:13.828 Could not set queue depth (nvme8n1) 00:19:13.828 Could not set queue depth (nvme9n1) 00:19:13.828 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:13.828 fio-3.35 00:19:13.828 Starting 11 threads 00:19:26.050 00:19:26.050 job0: (groupid=0, jobs=1): err= 0: pid=77845: Sun Nov 17 08:19:29 2024 00:19:26.050 read: IOPS=185, BW=46.3MiB/s (48.5MB/s)(469MiB/10122msec) 00:19:26.050 slat (usec): min=17, max=127587, avg=5233.56, stdev=13556.87 00:19:26.051 clat (msec): min=38, max=527, avg=339.42, stdev=62.45 00:19:26.051 lat (msec): min=39, max=527, avg=344.65, stdev=62.46 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 65], 5.00th=[ 253], 10.00th=[ 275], 20.00th=[ 309], 00:19:26.051 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 355], 00:19:26.051 | 70.00th=[ 363], 80.00th=[ 384], 90.00th=[ 409], 95.00th=[ 426], 00:19:26.051 | 99.00th=[ 468], 99.50th=[ 477], 99.90th=[ 527], 99.95th=[ 527], 00:19:26.051 | 99.99th=[ 527] 00:19:26.051 bw ( KiB/s): min=40448, max=53248, 
per=6.32%, avg=46322.75, stdev=2817.56, samples=20 00:19:26.051 iops : min= 158, max= 208, avg=180.85, stdev=11.09, samples=20 00:19:26.051 lat (msec) : 50=0.64%, 100=0.96%, 250=3.09%, 500=95.14%, 750=0.16% 00:19:26.051 cpu : usr=0.11%, sys=1.21%, ctx=379, majf=0, minf=4097 00:19:26.051 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:19:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.051 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.051 issued rwts: total=1874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.051 job1: (groupid=0, jobs=1): err= 0: pid=77846: Sun Nov 17 08:19:29 2024 00:19:26.051 read: IOPS=109, BW=27.4MiB/s (28.7MB/s)(279MiB/10178msec) 00:19:26.051 slat (usec): min=33, max=182051, avg=8972.85, stdev=23537.71 00:19:26.051 clat (msec): min=87, max=838, avg=574.08, stdev=167.62 00:19:26.051 lat (msec): min=87, max=838, avg=583.05, stdev=169.38 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 115], 5.00th=[ 140], 10.00th=[ 305], 20.00th=[ 510], 00:19:26.051 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 617], 60.00th=[ 651], 00:19:26.051 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 735], 95.00th=[ 751], 00:19:26.051 | 99.00th=[ 785], 99.50th=[ 793], 99.90th=[ 802], 99.95th=[ 835], 00:19:26.051 | 99.99th=[ 835] 00:19:26.051 bw ( KiB/s): min=20480, max=51712, per=3.67%, avg=26923.20, stdev=6791.47, samples=20 00:19:26.051 iops : min= 80, max= 202, avg=105.05, stdev=26.54, samples=20 00:19:26.051 lat (msec) : 100=0.45%, 250=8.70%, 500=9.60%, 750=74.62%, 1000=6.64% 00:19:26.051 cpu : usr=0.09%, sys=0.69%, ctx=243, majf=0, minf=4097 00:19:26.051 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.051 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.051 issued rwts: total=1115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.051 job2: (groupid=0, jobs=1): err= 0: pid=77847: Sun Nov 17 08:19:29 2024 00:19:26.051 read: IOPS=184, BW=46.1MiB/s (48.3MB/s)(466MiB/10115msec) 00:19:26.051 slat (usec): min=29, max=106873, avg=5093.31, stdev=13178.26 00:19:26.051 clat (msec): min=60, max=545, avg=341.06, stdev=47.73 00:19:26.051 lat (msec): min=61, max=546, avg=346.15, stdev=47.36 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 174], 5.00th=[ 279], 10.00th=[ 296], 20.00th=[ 313], 00:19:26.051 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:19:26.051 | 70.00th=[ 359], 80.00th=[ 368], 90.00th=[ 380], 95.00th=[ 393], 00:19:26.051 | 99.00th=[ 535], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:19:26.051 | 99.99th=[ 550] 00:19:26.051 bw ( KiB/s): min=19968, max=51097, per=6.29%, avg=46076.90, stdev=6420.13, samples=20 00:19:26.051 iops : min= 78, max= 199, avg=179.80, stdev=25.04, samples=20 00:19:26.051 lat (msec) : 100=0.32%, 250=2.04%, 500=96.14%, 750=1.50% 00:19:26.051 cpu : usr=0.07%, sys=1.18%, ctx=391, majf=0, minf=4097 00:19:26.051 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:19:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.051 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.051 issued rwts: total=1865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.051 
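Stepping back from the per-job numbers for a moment: the job file echoed before "Starting 11 threads" is what scripts/fio-wrapper appears to assemble from its `-p nvmf -i 262144 -d 64 -t read -r 10` arguments plus one [jobN] stanza per connected namespace. A stand-alone reproduction is sketched below; the flag-to-field mapping and the assumption that the wrapper works through a generated ini are inferred from the trace, not taken from the wrapper's source.

cat > /tmp/multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
# ...one [jobN] section per remaining namespace, /dev/nvme1n1 through /dev/nvme10n1,
# in the same order as the filenames listed in the trace
EOF
fio /tmp/multiconnection.fio

The second pass launched after the read results (fio-wrapper with `-t randwrite`) uses the same layout with rw=randwrite, which is why its job file dump and "Could not set queue depth" lines look identical apart from the rw= field.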
latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.051 job3: (groupid=0, jobs=1): err= 0: pid=77848: Sun Nov 17 08:19:29 2024 00:19:26.051 read: IOPS=108, BW=27.2MiB/s (28.5MB/s)(276MiB/10163msec) 00:19:26.051 slat (usec): min=29, max=230964, avg=8506.95, stdev=23851.04 00:19:26.051 clat (msec): min=55, max=836, avg=578.20, stdev=146.25 00:19:26.051 lat (msec): min=56, max=836, avg=586.71, stdev=148.38 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 146], 5.00th=[ 169], 10.00th=[ 422], 20.00th=[ 531], 00:19:26.051 | 30.00th=[ 567], 40.00th=[ 592], 50.00th=[ 609], 60.00th=[ 634], 00:19:26.051 | 70.00th=[ 651], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 743], 00:19:26.051 | 99.00th=[ 793], 99.50th=[ 827], 99.90th=[ 835], 99.95th=[ 835], 00:19:26.051 | 99.99th=[ 835] 00:19:26.051 bw ( KiB/s): min=18432, max=43008, per=3.64%, avg=26659.25, stdev=5069.03, samples=20 00:19:26.051 iops : min= 72, max= 168, avg=103.90, stdev=19.83, samples=20 00:19:26.051 lat (msec) : 100=0.45%, 250=6.79%, 500=10.05%, 750=78.01%, 1000=4.71% 00:19:26.051 cpu : usr=0.02%, sys=0.83%, ctx=241, majf=0, minf=4097 00:19:26.051 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.051 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.051 issued rwts: total=1105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.051 job4: (groupid=0, jobs=1): err= 0: pid=77849: Sun Nov 17 08:19:29 2024 00:19:26.051 read: IOPS=111, BW=27.8MiB/s (29.1MB/s)(282MiB/10170msec) 00:19:26.051 slat (usec): min=33, max=178243, avg=8720.01, stdev=24334.68 00:19:26.051 clat (msec): min=44, max=840, avg=566.01, stdev=173.32 00:19:26.051 lat (msec): min=46, max=840, avg=574.73, stdev=175.55 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 80], 5.00th=[ 159], 10.00th=[ 228], 20.00th=[ 510], 00:19:26.051 | 30.00th=[ 558], 40.00th=[ 584], 50.00th=[ 617], 60.00th=[ 642], 00:19:26.051 | 70.00th=[ 667], 80.00th=[ 701], 90.00th=[ 726], 95.00th=[ 735], 00:19:26.051 | 99.00th=[ 776], 99.50th=[ 785], 99.90th=[ 802], 99.95th=[ 844], 00:19:26.051 | 99.99th=[ 844] 00:19:26.051 bw ( KiB/s): min=15329, max=65024, per=3.72%, avg=27257.50, stdev=10432.75, samples=20 00:19:26.051 iops : min= 59, max= 254, avg=106.35, stdev=40.83, samples=20 00:19:26.051 lat (msec) : 50=0.09%, 100=3.72%, 250=7.53%, 500=7.26%, 750=78.30% 00:19:26.051 lat (msec) : 1000=3.10% 00:19:26.051 cpu : usr=0.08%, sys=0.83%, ctx=207, majf=0, minf=4097 00:19:26.051 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:19:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.051 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.051 issued rwts: total=1129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.051 job5: (groupid=0, jobs=1): err= 0: pid=77850: Sun Nov 17 08:19:29 2024 00:19:26.051 read: IOPS=440, BW=110MiB/s (115MB/s)(1108MiB/10065msec) 00:19:26.051 slat (usec): min=21, max=103221, avg=2251.56, stdev=5638.18 00:19:26.051 clat (msec): min=34, max=277, avg=142.70, stdev=28.98 00:19:26.051 lat (msec): min=34, max=277, avg=144.95, stdev=29.27 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 42], 5.00th=[ 79], 10.00th=[ 115], 20.00th=[ 132], 00:19:26.051 | 30.00th=[ 140], 40.00th=[ 
144], 50.00th=[ 148], 60.00th=[ 150], 00:19:26.051 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 174], 00:19:26.051 | 99.00th=[ 234], 99.50th=[ 245], 99.90th=[ 264], 99.95th=[ 279], 00:19:26.051 | 99.99th=[ 279] 00:19:26.051 bw ( KiB/s): min=101684, max=139264, per=15.25%, avg=111775.60, stdev=8753.93, samples=20 00:19:26.051 iops : min= 397, max= 544, avg=436.60, stdev=34.20, samples=20 00:19:26.051 lat (msec) : 50=3.36%, 100=3.25%, 250=92.93%, 500=0.45% 00:19:26.051 cpu : usr=0.33%, sys=2.39%, ctx=896, majf=0, minf=4097 00:19:26.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:26.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.051 issued rwts: total=4430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.051 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.051 job6: (groupid=0, jobs=1): err= 0: pid=77851: Sun Nov 17 08:19:29 2024 00:19:26.051 read: IOPS=109, BW=27.4MiB/s (28.8MB/s)(279MiB/10165msec) 00:19:26.051 slat (usec): min=15, max=266729, avg=8766.65, stdev=24608.28 00:19:26.051 clat (msec): min=44, max=777, avg=572.92, stdev=167.34 00:19:26.051 lat (msec): min=46, max=811, avg=581.68, stdev=169.69 00:19:26.051 clat percentiles (msec): 00:19:26.051 | 1.00th=[ 55], 5.00th=[ 148], 10.00th=[ 317], 20.00th=[ 510], 00:19:26.052 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 609], 60.00th=[ 651], 00:19:26.052 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 735], 95.00th=[ 751], 00:19:26.052 | 99.00th=[ 768], 99.50th=[ 776], 99.90th=[ 776], 99.95th=[ 776], 00:19:26.052 | 99.99th=[ 776] 00:19:26.052 bw ( KiB/s): min=16896, max=56832, per=3.67%, avg=26923.10, stdev=8282.22, samples=20 00:19:26.052 iops : min= 66, max= 222, avg=105.00, stdev=32.40, samples=20 00:19:26.052 lat (msec) : 50=0.54%, 100=1.17%, 250=7.44%, 500=8.79%, 750=76.77% 00:19:26.052 lat (msec) : 1000=5.29% 00:19:26.052 cpu : usr=0.06%, sys=0.77%, ctx=244, majf=0, minf=4097 00:19:26.052 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.052 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.052 issued rwts: total=1115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.052 job7: (groupid=0, jobs=1): err= 0: pid=77852: Sun Nov 17 08:19:29 2024 00:19:26.052 read: IOPS=425, BW=106MiB/s (112MB/s)(1072MiB/10071msec) 00:19:26.052 slat (usec): min=16, max=50732, avg=2301.14, stdev=5571.28 00:19:26.052 clat (msec): min=12, max=233, avg=147.54, stdev=22.83 00:19:26.052 lat (msec): min=13, max=234, avg=149.84, stdev=23.03 00:19:26.052 clat percentiles (msec): 00:19:26.052 | 1.00th=[ 55], 5.00th=[ 116], 10.00th=[ 128], 20.00th=[ 136], 00:19:26.052 | 30.00th=[ 142], 40.00th=[ 146], 50.00th=[ 150], 60.00th=[ 153], 00:19:26.052 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:19:26.052 | 99.00th=[ 199], 99.50th=[ 205], 99.90th=[ 234], 99.95th=[ 234], 00:19:26.052 | 99.99th=[ 234] 00:19:26.052 bw ( KiB/s): min=79872, max=126976, per=14.76%, avg=108127.20, stdev=7978.33, samples=20 00:19:26.052 iops : min= 312, max= 496, avg=422.35, stdev=31.16, samples=20 00:19:26.052 lat (msec) : 20=0.19%, 50=0.56%, 100=2.49%, 250=96.76% 00:19:26.052 cpu : usr=0.28%, sys=2.52%, ctx=893, majf=0, minf=4097 00:19:26.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.052 issued rwts: total=4289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.052 job8: (groupid=0, jobs=1): err= 0: pid=77853: Sun Nov 17 08:19:29 2024 00:19:26.052 read: IOPS=109, BW=27.3MiB/s (28.6MB/s)(278MiB/10172msec) 00:19:26.052 slat (usec): min=15, max=349956, avg=9013.18, stdev=25638.50 00:19:26.052 clat (msec): min=59, max=870, avg=575.81, stdev=164.75 00:19:26.052 lat (msec): min=60, max=870, avg=584.82, stdev=166.89 00:19:26.052 clat percentiles (msec): 00:19:26.052 | 1.00th=[ 64], 5.00th=[ 122], 10.00th=[ 313], 20.00th=[ 518], 00:19:26.052 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 617], 60.00th=[ 659], 00:19:26.052 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 718], 95.00th=[ 751], 00:19:26.052 | 99.00th=[ 776], 99.50th=[ 818], 99.90th=[ 827], 99.95th=[ 869], 00:19:26.052 | 99.99th=[ 869] 00:19:26.052 bw ( KiB/s): min=12288, max=63488, per=3.66%, avg=26795.10, stdev=9677.80, samples=20 00:19:26.052 iops : min= 48, max= 248, avg=104.55, stdev=37.81, samples=20 00:19:26.052 lat (msec) : 100=4.14%, 250=1.89%, 500=11.08%, 750=78.74%, 1000=4.14% 00:19:26.052 cpu : usr=0.05%, sys=0.81%, ctx=204, majf=0, minf=4097 00:19:26.052 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.052 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.052 issued rwts: total=1110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.052 job9: (groupid=0, jobs=1): err= 0: pid=77854: Sun Nov 17 08:19:29 2024 00:19:26.052 read: IOPS=917, BW=229MiB/s (241MB/s)(2301MiB/10025msec) 00:19:26.052 slat (usec): min=22, max=31540, avg=1033.81, stdev=2289.47 00:19:26.052 clat (msec): min=23, max=233, avg=68.48, stdev=17.48 00:19:26.052 lat (msec): min=26, max=260, avg=69.52, stdev=17.43 00:19:26.052 clat percentiles (msec): 00:19:26.052 | 1.00th=[ 51], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 62], 00:19:26.052 | 30.00th=[ 64], 40.00th=[ 66], 50.00th=[ 67], 60.00th=[ 68], 00:19:26.052 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 77], 00:19:26.052 | 99.00th=[ 167], 99.50th=[ 209], 99.90th=[ 228], 99.95th=[ 230], 00:19:26.052 | 99.99th=[ 234] 00:19:26.052 bw ( KiB/s): min=79360, max=256000, per=31.92%, avg=233884.30, stdev=37447.10, samples=20 00:19:26.052 iops : min= 310, max= 1000, avg=913.60, stdev=146.28, samples=20 00:19:26.052 lat (msec) : 50=0.98%, 100=96.34%, 250=2.68% 00:19:26.052 cpu : usr=0.65%, sys=4.95%, ctx=1825, majf=0, minf=4098 00:19:26.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.052 issued rwts: total=9202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.052 job10: (groupid=0, jobs=1): err= 0: pid=77855: Sun Nov 17 08:19:29 2024 00:19:26.052 read: IOPS=187, BW=46.9MiB/s (49.2MB/s)(475MiB/10124msec) 00:19:26.052 slat (usec): min=33, max=154713, avg=5274.28, stdev=13367.73 00:19:26.052 clat (msec): min=83, max=462, avg=334.70, 
stdev=40.77 00:19:26.052 lat (msec): min=85, max=522, avg=339.98, stdev=41.19 00:19:26.052 clat percentiles (msec): 00:19:26.052 | 1.00th=[ 153], 5.00th=[ 279], 10.00th=[ 296], 20.00th=[ 313], 00:19:26.052 | 30.00th=[ 321], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:19:26.052 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 372], 95.00th=[ 388], 00:19:26.052 | 99.00th=[ 414], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:19:26.052 | 99.99th=[ 464] 00:19:26.052 bw ( KiB/s): min=32320, max=54674, per=6.41%, avg=46989.85, stdev=4632.49, samples=20 00:19:26.052 iops : min= 126, max= 213, avg=183.45, stdev=18.09, samples=20 00:19:26.052 lat (msec) : 100=0.58%, 250=1.58%, 500=97.84% 00:19:26.052 cpu : usr=0.18%, sys=1.20%, ctx=382, majf=0, minf=4097 00:19:26.052 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:19:26.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.052 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.052 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.052 00:19:26.052 Run status group 0 (all jobs): 00:19:26.052 READ: bw=716MiB/s (750MB/s), 27.2MiB/s-229MiB/s (28.5MB/s-241MB/s), io=7284MiB (7637MB), run=10025-10178msec 00:19:26.052 00:19:26.052 Disk stats (read/write): 00:19:26.052 nvme0n1: ios=3651/0, merge=0/0, ticks=1225742/0, in_queue=1225742, util=97.95% 00:19:26.052 nvme10n1: ios=2104/0, merge=0/0, ticks=1207338/0, in_queue=1207338, util=98.19% 00:19:26.052 nvme1n1: ios=3608/0, merge=0/0, ticks=1226988/0, in_queue=1226988, util=98.04% 00:19:26.052 nvme2n1: ios=2107/0, merge=0/0, ticks=1216942/0, in_queue=1216942, util=98.17% 00:19:26.052 nvme3n1: ios=2130/0, merge=0/0, ticks=1201010/0, in_queue=1201010, util=98.29% 00:19:26.052 nvme4n1: ios=8770/0, merge=0/0, ticks=1235481/0, in_queue=1235481, util=98.44% 00:19:26.052 nvme5n1: ios=2113/0, merge=0/0, ticks=1207673/0, in_queue=1207673, util=98.45% 00:19:26.052 nvme6n1: ios=8488/0, merge=0/0, ticks=1238321/0, in_queue=1238321, util=98.56% 00:19:26.052 nvme7n1: ios=2106/0, merge=0/0, ticks=1210839/0, in_queue=1210839, util=98.79% 00:19:26.052 nvme8n1: ios=17878/0, merge=0/0, ticks=1211209/0, in_queue=1211209, util=98.90% 00:19:26.052 nvme9n1: ios=3693/0, merge=0/0, ticks=1230090/0, in_queue=1230090, util=98.89% 00:19:26.052 08:19:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:26.052 [global] 00:19:26.052 thread=1 00:19:26.052 invalidate=1 00:19:26.052 rw=randwrite 00:19:26.052 time_based=1 00:19:26.052 runtime=10 00:19:26.052 ioengine=libaio 00:19:26.052 direct=1 00:19:26.052 bs=262144 00:19:26.052 iodepth=64 00:19:26.052 norandommap=1 00:19:26.052 numjobs=1 00:19:26.052 00:19:26.052 [job0] 00:19:26.052 filename=/dev/nvme0n1 00:19:26.052 [job1] 00:19:26.052 filename=/dev/nvme10n1 00:19:26.052 [job2] 00:19:26.052 filename=/dev/nvme1n1 00:19:26.052 [job3] 00:19:26.052 filename=/dev/nvme2n1 00:19:26.052 [job4] 00:19:26.052 filename=/dev/nvme3n1 00:19:26.052 [job5] 00:19:26.052 filename=/dev/nvme4n1 00:19:26.052 [job6] 00:19:26.052 filename=/dev/nvme5n1 00:19:26.052 [job7] 00:19:26.052 filename=/dev/nvme6n1 00:19:26.052 [job8] 00:19:26.053 filename=/dev/nvme7n1 00:19:26.053 [job9] 00:19:26.053 filename=/dev/nvme8n1 00:19:26.053 [job10] 00:19:26.053 filename=/dev/nvme9n1 00:19:26.053 Could not set queue 
depth (nvme0n1) 00:19:26.053 Could not set queue depth (nvme10n1) 00:19:26.053 Could not set queue depth (nvme1n1) 00:19:26.053 Could not set queue depth (nvme2n1) 00:19:26.053 Could not set queue depth (nvme3n1) 00:19:26.053 Could not set queue depth (nvme4n1) 00:19:26.053 Could not set queue depth (nvme5n1) 00:19:26.053 Could not set queue depth (nvme6n1) 00:19:26.053 Could not set queue depth (nvme7n1) 00:19:26.053 Could not set queue depth (nvme8n1) 00:19:26.053 Could not set queue depth (nvme9n1) 00:19:26.053 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.053 fio-3.35 00:19:26.053 Starting 11 threads 00:19:36.046 00:19:36.046 job0: (groupid=0, jobs=1): err= 0: pid=78056: Sun Nov 17 08:19:40 2024 00:19:36.046 write: IOPS=310, BW=77.6MiB/s (81.4MB/s)(789MiB/10169msec); 0 zone resets 00:19:36.046 slat (usec): min=35, max=40606, avg=3125.67, stdev=5457.57 00:19:36.046 clat (msec): min=46, max=364, avg=202.95, stdev=19.41 00:19:36.046 lat (msec): min=46, max=364, avg=206.08, stdev=18.99 00:19:36.046 clat percentiles (msec): 00:19:36.046 | 1.00th=[ 129], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 197], 00:19:36.046 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:19:36.046 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 211], 95.00th=[ 213], 00:19:36.046 | 99.00th=[ 268], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 363], 00:19:36.046 | 99.99th=[ 363] 00:19:36.046 bw ( KiB/s): min=69632, max=81920, per=11.53%, avg=79140.80, stdev=2575.31, samples=20 00:19:36.046 iops : min= 272, max= 320, avg=309.05, stdev=10.04, samples=20 00:19:36.046 lat (msec) : 50=0.13%, 100=0.54%, 250=97.81%, 500=1.52% 00:19:36.046 cpu : usr=1.20%, sys=1.09%, ctx=3636, majf=0, minf=1 00:19:36.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:36.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.046 issued rwts: total=0,3156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.046 job1: (groupid=0, jobs=1): err= 0: pid=78057: Sun Nov 17 08:19:40 2024 00:19:36.046 
write: IOPS=217, BW=54.4MiB/s (57.0MB/s)(557MiB/10244msec); 0 zone resets 00:19:36.046 slat (usec): min=24, max=35133, avg=4488.62, stdev=7898.69 00:19:36.046 clat (msec): min=34, max=536, avg=289.45, stdev=44.49 00:19:36.046 lat (msec): min=34, max=536, avg=293.94, stdev=44.54 00:19:36.046 clat percentiles (msec): 00:19:36.046 | 1.00th=[ 91], 5.00th=[ 203], 10.00th=[ 271], 20.00th=[ 284], 00:19:36.046 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 300], 60.00th=[ 300], 00:19:36.046 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 309], 95.00th=[ 313], 00:19:36.046 | 99.00th=[ 418], 99.50th=[ 477], 99.90th=[ 514], 99.95th=[ 535], 00:19:36.046 | 99.99th=[ 535] 00:19:36.046 bw ( KiB/s): min=51200, max=73728, per=8.07%, avg=55407.30, stdev=4662.57, samples=20 00:19:36.046 iops : min= 200, max= 288, avg=216.35, stdev=18.22, samples=20 00:19:36.046 lat (msec) : 50=0.18%, 100=0.94%, 250=5.02%, 500=93.58%, 750=0.27% 00:19:36.046 cpu : usr=0.76%, sys=0.67%, ctx=1286, majf=0, minf=1 00:19:36.046 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:36.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.046 issued rwts: total=0,2229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.046 job2: (groupid=0, jobs=1): err= 0: pid=78058: Sun Nov 17 08:19:40 2024 00:19:36.046 write: IOPS=223, BW=55.8MiB/s (58.5MB/s)(569MiB/10207msec); 0 zone resets 00:19:36.046 slat (usec): min=34, max=26211, avg=4288.71, stdev=7586.14 00:19:36.046 clat (msec): min=25, max=506, avg=282.43, stdev=43.08 00:19:36.046 lat (msec): min=25, max=506, avg=286.72, stdev=42.96 00:19:36.046 clat percentiles (msec): 00:19:36.046 | 1.00th=[ 62], 5.00th=[ 259], 10.00th=[ 268], 20.00th=[ 275], 00:19:36.046 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:19:36.046 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:19:36.046 | 99.00th=[ 393], 99.50th=[ 447], 99.90th=[ 485], 99.95th=[ 506], 00:19:36.046 | 99.99th=[ 506] 00:19:36.046 bw ( KiB/s): min=53760, max=59904, per=8.25%, avg=56661.55, stdev=1669.76, samples=20 00:19:36.046 iops : min= 210, max= 234, avg=221.25, stdev= 6.57, samples=20 00:19:36.046 lat (msec) : 50=0.70%, 100=1.45%, 250=1.93%, 500=95.83%, 750=0.09% 00:19:36.046 cpu : usr=0.61%, sys=1.04%, ctx=3240, majf=0, minf=1 00:19:36.046 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:36.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.046 issued rwts: total=0,2277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.046 job3: (groupid=0, jobs=1): err= 0: pid=78067: Sun Nov 17 08:19:40 2024 00:19:36.046 write: IOPS=225, BW=56.4MiB/s (59.1MB/s)(575MiB/10202msec); 0 zone resets 00:19:36.046 slat (usec): min=19, max=24063, avg=4222.67, stdev=7551.20 00:19:36.046 clat (msec): min=27, max=502, avg=279.38, stdev=40.59 00:19:36.046 lat (msec): min=27, max=502, avg=283.61, stdev=40.65 00:19:36.046 clat percentiles (msec): 00:19:36.046 | 1.00th=[ 79], 5.00th=[ 239], 10.00th=[ 264], 20.00th=[ 275], 00:19:36.046 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:19:36.046 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 300], 00:19:36.046 | 99.00th=[ 388], 99.50th=[ 
443], 99.90th=[ 481], 99.95th=[ 502], 00:19:36.046 | 99.99th=[ 502] 00:19:36.046 bw ( KiB/s): min=53760, max=72192, per=8.34%, avg=57275.95, stdev=3753.72, samples=20 00:19:36.046 iops : min= 210, max= 282, avg=223.65, stdev=14.70, samples=20 00:19:36.046 lat (msec) : 50=0.35%, 100=1.09%, 250=4.43%, 500=94.05%, 750=0.09% 00:19:36.046 cpu : usr=0.67%, sys=1.00%, ctx=1770, majf=0, minf=1 00:19:36.046 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:36.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.046 issued rwts: total=0,2301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.046 job4: (groupid=0, jobs=1): err= 0: pid=78071: Sun Nov 17 08:19:40 2024 00:19:36.046 write: IOPS=216, BW=54.2MiB/s (56.8MB/s)(555MiB/10236msec); 0 zone resets 00:19:36.046 slat (usec): min=25, max=52037, avg=4467.57, stdev=8064.55 00:19:36.046 clat (msec): min=26, max=532, avg=290.70, stdev=53.45 00:19:36.046 lat (msec): min=26, max=533, avg=295.17, stdev=53.79 00:19:36.046 clat percentiles (msec): 00:19:36.046 | 1.00th=[ 74], 5.00th=[ 153], 10.00th=[ 275], 20.00th=[ 288], 00:19:36.046 | 30.00th=[ 296], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 305], 00:19:36.046 | 70.00th=[ 309], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 321], 00:19:36.046 | 99.00th=[ 418], 99.50th=[ 472], 99.90th=[ 514], 99.95th=[ 535], 00:19:36.046 | 99.99th=[ 535] 00:19:36.046 bw ( KiB/s): min=51097, max=87040, per=8.03%, avg=55141.05, stdev=7608.66, samples=20 00:19:36.046 iops : min= 199, max= 340, avg=215.25, stdev=29.75, samples=20 00:19:36.046 lat (msec) : 50=0.54%, 100=1.31%, 250=6.04%, 500=91.84%, 750=0.27% 00:19:36.046 cpu : usr=0.72%, sys=0.94%, ctx=2273, majf=0, minf=1 00:19:36.046 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:36.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.046 issued rwts: total=0,2218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.046 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.046 job5: (groupid=0, jobs=1): err= 0: pid=78072: Sun Nov 17 08:19:40 2024 00:19:36.046 write: IOPS=217, BW=54.4MiB/s (57.0MB/s)(557MiB/10240msec); 0 zone resets 00:19:36.046 slat (usec): min=27, max=36601, avg=4312.02, stdev=7921.45 00:19:36.046 clat (msec): min=7, max=534, avg=289.70, stdev=55.18 00:19:36.046 lat (msec): min=7, max=534, avg=294.01, stdev=55.79 00:19:36.046 clat percentiles (msec): 00:19:36.046 | 1.00th=[ 31], 5.00th=[ 165], 10.00th=[ 264], 20.00th=[ 288], 00:19:36.046 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 305], 00:19:36.046 | 70.00th=[ 309], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 321], 00:19:36.046 | 99.00th=[ 418], 99.50th=[ 472], 99.90th=[ 514], 99.95th=[ 535], 00:19:36.046 | 99.99th=[ 535] 00:19:36.046 bw ( KiB/s): min=51097, max=91136, per=8.06%, avg=55346.45, stdev=8606.24, samples=20 00:19:36.046 iops : min= 199, max= 356, avg=216.05, stdev=33.67, samples=20 00:19:36.046 lat (msec) : 10=0.18%, 20=0.45%, 50=0.76%, 100=0.90%, 250=6.38% 00:19:36.046 lat (msec) : 500=91.06%, 750=0.27% 00:19:36.046 cpu : usr=0.72%, sys=0.95%, ctx=2755, majf=0, minf=1 00:19:36.046 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:36.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:36.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.047 issued rwts: total=0,2227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.047 job6: (groupid=0, jobs=1): err= 0: pid=78073: Sun Nov 17 08:19:40 2024 00:19:36.047 write: IOPS=308, BW=77.1MiB/s (80.8MB/s)(783MiB/10155msec); 0 zone resets 00:19:36.047 slat (usec): min=17, max=132372, avg=3188.43, stdev=5891.60 00:19:36.047 clat (msec): min=134, max=351, avg=204.25, stdev=17.23 00:19:36.047 lat (msec): min=134, max=351, avg=207.44, stdev=16.41 00:19:36.047 clat percentiles (msec): 00:19:36.047 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 197], 00:19:36.047 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 205], 00:19:36.047 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 211], 95.00th=[ 213], 00:19:36.047 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 351], 00:19:36.047 | 99.99th=[ 351] 00:19:36.047 bw ( KiB/s): min=57344, max=81920, per=11.44%, avg=78550.40, stdev=5109.13, samples=20 00:19:36.047 iops : min= 224, max= 320, avg=306.80, stdev=19.95, samples=20 00:19:36.047 lat (msec) : 250=98.05%, 500=1.95% 00:19:36.047 cpu : usr=1.04%, sys=1.23%, ctx=3839, majf=0, minf=1 00:19:36.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:36.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.047 issued rwts: total=0,3132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.047 job7: (groupid=0, jobs=1): err= 0: pid=78074: Sun Nov 17 08:19:40 2024 00:19:36.047 write: IOPS=224, BW=56.1MiB/s (58.8MB/s)(573MiB/10215msec); 0 zone resets 00:19:36.047 slat (usec): min=45, max=27880, avg=4211.40, stdev=7552.81 00:19:36.047 clat (msec): min=7, max=516, avg=280.67, stdev=42.98 00:19:36.047 lat (msec): min=7, max=516, avg=284.89, stdev=43.01 00:19:36.047 clat percentiles (msec): 00:19:36.047 | 1.00th=[ 66], 5.00th=[ 234], 10.00th=[ 266], 20.00th=[ 275], 00:19:36.047 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:19:36.047 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:19:36.047 | 99.00th=[ 401], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 518], 00:19:36.047 | 99.99th=[ 518] 00:19:36.047 bw ( KiB/s): min=51712, max=69771, per=8.31%, avg=57077.70, stdev=3326.13, samples=20 00:19:36.047 iops : min= 202, max= 272, avg=222.85, stdev=12.88, samples=20 00:19:36.047 lat (msec) : 10=0.09%, 20=0.13%, 50=0.35%, 100=1.00%, 250=3.62% 00:19:36.047 lat (msec) : 500=94.72%, 750=0.09% 00:19:36.047 cpu : usr=0.71%, sys=0.99%, ctx=2855, majf=0, minf=1 00:19:36.047 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:36.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.047 issued rwts: total=0,2293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.047 job8: (groupid=0, jobs=1): err= 0: pid=78075: Sun Nov 17 08:19:40 2024 00:19:36.047 write: IOPS=217, BW=54.3MiB/s (56.9MB/s)(556MiB/10229msec); 0 zone resets 00:19:36.047 slat (usec): min=22, max=116285, avg=4349.95, stdev=8201.10 00:19:36.047 clat (msec): min=4, max=524, avg=290.13, stdev=51.75 00:19:36.047 lat (msec): 
min=6, max=524, avg=294.48, stdev=51.79 00:19:36.047 clat percentiles (msec): 00:19:36.047 | 1.00th=[ 31], 5.00th=[ 201], 10.00th=[ 275], 20.00th=[ 284], 00:19:36.047 | 30.00th=[ 288], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 305], 00:19:36.047 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 309], 95.00th=[ 317], 00:19:36.047 | 99.00th=[ 443], 99.50th=[ 464], 99.90th=[ 506], 99.95th=[ 527], 00:19:36.047 | 99.99th=[ 527] 00:19:36.047 bw ( KiB/s): min=51097, max=76288, per=8.04%, avg=55228.70, stdev=5313.43, samples=20 00:19:36.047 iops : min= 199, max= 298, avg=215.65, stdev=20.79, samples=20 00:19:36.047 lat (msec) : 10=0.27%, 20=0.50%, 50=1.26%, 250=4.28%, 500=93.43% 00:19:36.047 lat (msec) : 750=0.27% 00:19:36.047 cpu : usr=0.76%, sys=0.87%, ctx=1271, majf=0, minf=1 00:19:36.047 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:36.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.047 issued rwts: total=0,2222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.047 job9: (groupid=0, jobs=1): err= 0: pid=78076: Sun Nov 17 08:19:40 2024 00:19:36.047 write: IOPS=224, BW=56.0MiB/s (58.8MB/s)(573MiB/10214msec); 0 zone resets 00:19:36.047 slat (usec): min=34, max=23155, avg=4203.25, stdev=7568.67 00:19:36.047 clat (msec): min=8, max=511, avg=280.97, stdev=41.43 00:19:36.047 lat (msec): min=8, max=511, avg=285.18, stdev=41.50 00:19:36.047 clat percentiles (msec): 00:19:36.047 | 1.00th=[ 36], 5.00th=[ 255], 10.00th=[ 266], 20.00th=[ 275], 00:19:36.047 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:19:36.047 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 300], 00:19:36.047 | 99.00th=[ 397], 99.50th=[ 451], 99.90th=[ 493], 99.95th=[ 510], 00:19:36.047 | 99.99th=[ 510] 00:19:36.047 bw ( KiB/s): min=53760, max=67584, per=8.30%, avg=56982.95, stdev=2881.14, samples=20 00:19:36.047 iops : min= 210, max= 264, avg=222.45, stdev=11.30, samples=20 00:19:36.047 lat (msec) : 10=0.26%, 20=0.52%, 50=0.26%, 100=0.31%, 250=3.32% 00:19:36.047 lat (msec) : 500=95.24%, 750=0.09% 00:19:36.047 cpu : usr=0.88%, sys=0.72%, ctx=1913, majf=0, minf=2 00:19:36.047 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:36.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.047 issued rwts: total=0,2290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.047 job10: (groupid=0, jobs=1): err= 0: pid=78077: Sun Nov 17 08:19:40 2024 00:19:36.047 write: IOPS=307, BW=76.8MiB/s (80.6MB/s)(782MiB/10171msec); 0 zone resets 00:19:36.047 slat (usec): min=31, max=128214, avg=3193.67, stdev=5872.50 00:19:36.047 clat (msec): min=48, max=367, avg=204.82, stdev=22.81 00:19:36.047 lat (msec): min=48, max=367, avg=208.01, stdev=22.28 00:19:36.047 clat percentiles (msec): 00:19:36.047 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 197], 00:19:36.047 | 30.00th=[ 201], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:19:36.047 | 70.00th=[ 207], 80.00th=[ 209], 90.00th=[ 211], 95.00th=[ 213], 00:19:36.047 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 368], 00:19:36.047 | 99.99th=[ 368] 00:19:36.047 bw ( KiB/s): min=53354, max=81920, per=11.42%, avg=78394.10, stdev=6018.12, 
samples=20 00:19:36.047 iops : min= 208, max= 320, avg=306.15, stdev=23.59, samples=20 00:19:36.047 lat (msec) : 50=0.13%, 100=0.38%, 250=96.64%, 500=2.85% 00:19:36.047 cpu : usr=1.07%, sys=1.24%, ctx=4063, majf=0, minf=1 00:19:36.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:36.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.047 issued rwts: total=0,3126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.047 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.047 00:19:36.047 Run status group 0 (all jobs): 00:19:36.047 WRITE: bw=670MiB/s (703MB/s), 54.2MiB/s-77.6MiB/s (56.8MB/s-81.4MB/s), io=6868MiB (7201MB), run=10155-10244msec 00:19:36.047 00:19:36.047 Disk stats (read/write): 00:19:36.047 nvme0n1: ios=50/6177, merge=0/0, ticks=56/1209220, in_queue=1209276, util=97.85% 00:19:36.047 nvme10n1: ios=49/4455, merge=0/0, ticks=88/1239779, in_queue=1239867, util=98.28% 00:19:36.047 nvme1n1: ios=49/4423, merge=0/0, ticks=55/1203307, in_queue=1203362, util=98.01% 00:19:36.047 nvme2n1: ios=49/4469, merge=0/0, ticks=68/1203769, in_queue=1203837, util=98.22% 00:19:36.047 nvme3n1: ios=45/4431, merge=0/0, ticks=123/1238576, in_queue=1238699, util=98.41% 00:19:36.047 nvme4n1: ios=21/4450, merge=0/0, ticks=29/1239913, in_queue=1239942, util=98.11% 00:19:36.047 nvme5n1: ios=0/6113, merge=0/0, ticks=0/1206834, in_queue=1206834, util=97.94% 00:19:36.047 nvme6n1: ios=15/4461, merge=0/0, ticks=9/1205740, in_queue=1205749, util=98.31% 00:19:36.047 nvme7n1: ios=0/4435, merge=0/0, ticks=0/1239286, in_queue=1239286, util=98.44% 00:19:36.047 nvme8n1: ios=0/4452, merge=0/0, ticks=0/1205240, in_queue=1205240, util=98.65% 00:19:36.047 nvme9n1: ios=0/6119, merge=0/0, ticks=0/1209378, in_queue=1209378, util=98.71% 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.047 08:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:36.047 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:36.047 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:36.048 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:36.048 08:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:36.048 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:36.048 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:36.048 08:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:36.048 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.048 08:19:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:36.048 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:36.048 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:36.048 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.048 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.048 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:36.308 08:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:36.308 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:19:36.308 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:36.309 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:19:36.309 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:36.569 08:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:36.569 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:36.569 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:36.569 
08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:36.569 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.828 rmmod nvme_tcp 00:19:36.828 rmmod nvme_fabrics 00:19:36.828 rmmod nvme_keyring 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 77373 ']' 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 77373 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 77373 ']' 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 77373 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.828 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77373 00:19:36.828 killing process with pid 77373 00:19:36.829 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.829 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.829 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77373' 00:19:36.829 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 77373 00:19:36.829 08:19:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 77373 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.027 08:19:45 
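The block above is the multiconnection teardown loop: for each of the $NVMF_SUBSYS subsystems the initiator connection is dropped, the script waits for the SPDKn serial to disappear from lsblk, and the subsystem is deleted over RPC. A minimal sketch of that loop, assuming rpc.py as the RPC client and the SPDKn serial-number convention used in this run (the in-tree waitforserial_disconnect helper additionally bounds the wait with a retry counter):

  for i in $(seq 1 "$NVMF_SUBSYS"); do
    # drop the initiator-side connection to subsystem i
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # wait until no block device with serial SPDK$i remains (what waitforserial_disconnect does)
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do sleep 1; done
    # remove the subsystem from the target over JSON-RPC
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done
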
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:41.027 00:19:41.027 real 0m54.631s 00:19:41.027 user 3m10.941s 00:19:41.027 sys 0m25.489s 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@10 -- # set +x 00:19:41.027 ************************************ 00:19:41.027 END TEST nvmf_multiconnection 00:19:41.027 ************************************ 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.027 ************************************ 00:19:41.027 START TEST nvmf_initiator_timeout 00:19:41.027 ************************************ 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:41.027 * Looking for test storage... 00:19:41.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.027 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:41.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.028 --rc genhtml_branch_coverage=1 00:19:41.028 --rc genhtml_function_coverage=1 00:19:41.028 --rc genhtml_legend=1 00:19:41.028 --rc geninfo_all_blocks=1 00:19:41.028 --rc geninfo_unexecuted_blocks=1 00:19:41.028 00:19:41.028 ' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.028 --rc genhtml_branch_coverage=1 00:19:41.028 --rc genhtml_function_coverage=1 00:19:41.028 --rc genhtml_legend=1 00:19:41.028 --rc geninfo_all_blocks=1 00:19:41.028 --rc geninfo_unexecuted_blocks=1 00:19:41.028 00:19:41.028 ' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.028 --rc genhtml_branch_coverage=1 00:19:41.028 --rc genhtml_function_coverage=1 00:19:41.028 --rc genhtml_legend=1 00:19:41.028 --rc geninfo_all_blocks=1 00:19:41.028 --rc geninfo_unexecuted_blocks=1 00:19:41.028 00:19:41.028 ' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.028 --rc genhtml_branch_coverage=1 00:19:41.028 --rc genhtml_function_coverage=1 00:19:41.028 --rc genhtml_legend=1 00:19:41.028 --rc geninfo_all_blocks=1 00:19:41.028 --rc geninfo_unexecuted_blocks=1 00:19:41.028 00:19:41.028 ' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.028 08:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.028 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.028 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:41.029 Cannot find device "nvmf_init_br" 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:41.029 Cannot find device "nvmf_init_br2" 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:41.029 Cannot find device "nvmf_tgt_br" 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.029 Cannot find device "nvmf_tgt_br2" 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:41.029 Cannot find device "nvmf_init_br" 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:41.029 08:19:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:41.029 Cannot find device "nvmf_init_br2" 00:19:41.029 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:41.029 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:41.029 Cannot find device "nvmf_tgt_br" 00:19:41.029 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:41.029 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:41.289 Cannot find device "nvmf_tgt_br2" 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:41.289 08:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:41.289 Cannot find device "nvmf_br" 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:41.289 Cannot find device "nvmf_init_if" 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:41.289 Cannot find device "nvmf_init_if2" 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.289 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:41.289 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:19:41.549 00:19:41.549 --- 10.0.0.3 ping statistics --- 00:19:41.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.549 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.549 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:41.549 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:19:41.549 00:19:41.549 --- 10.0.0.4 ping statistics --- 00:19:41.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.549 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:41.549 00:19:41.549 --- 10.0.0.1 ping statistics --- 00:19:41.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.549 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:19:41.549 00:19:41.549 --- 10.0.0.2 ping statistics --- 00:19:41.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.549 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=78551 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 78551 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 78551 ']' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.549 08:19:46 
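The "Cannot find device ..." messages earlier are expected: nvmf_veth_init first makes a best-effort attempt to delete interfaces left over from a previous run, then builds the topology that the pings above verify, with two initiator veths on the host (10.0.0.1/24, 10.0.0.2/24), two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/24, 10.0.0.4/24), their peers enslaved to the nvmf_br bridge, and iptables ACCEPT rules for TCP port 4420. A condensed sketch of one initiator/target pair, using the interface names from this log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-side initiator interface
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target interface, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # host -> target-namespace connectivity check
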
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.549 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:41.549 [2024-11-17 08:19:46.537171] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:41.549 [2024-11-17 08:19:46.537274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.809 [2024-11-17 08:19:46.720590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.068 [2024-11-17 08:19:46.862232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.068 [2024-11-17 08:19:46.862541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.068 [2024-11-17 08:19:46.862572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.068 [2024-11-17 08:19:46.862584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.068 [2024-11-17 08:19:46.862598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
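nvmfappstart then launches the target inside that namespace and waits for its RPC socket. A sketch of the launch with the flags seen above; the waitforlisten polling shown here is an assumed simplification of the in-tree helper:

  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xF: run reactors on cores 0-3
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers on its default RPC socket /var/tmp/spdk.sock
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
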
00:19:42.068 [2024-11-17 08:19:46.865147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.068 [2024-11-17 08:19:46.865307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.068 [2024-11-17 08:19:46.866202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.068 [2024-11-17 08:19:46.866239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.327 [2024-11-17 08:19:47.113356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.586 Malloc0 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.586 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 Delay0 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 [2024-11-17 08:19:47.522176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:42.587 08:19:47 
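With the app up, the trace above configures the target over RPC: a 64 MB malloc bdev, a delay bdev wrapping it with 30 us average and p99 latencies, a TCP transport, and subsystem cnode1. The equivalent rpc.py sequence, assuming rpc.py as the client (rpc_cmd in the trace forwards its arguments to it); delay-bdev latencies are given in microseconds:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 30 -t 30 -w 30 -n 30                                    # avg/p99 read and write latency (usec)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
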
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 [2024-11-17 08:19:47.562552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:42.846 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:42.846 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:19:42.846 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.846 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:42.846 08:19:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78612 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:44.753 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:45.012 [global] 00:19:45.012 thread=1 00:19:45.012 invalidate=1 00:19:45.012 rw=write 00:19:45.012 time_based=1 00:19:45.012 runtime=60 00:19:45.012 ioengine=libaio 00:19:45.012 direct=1 00:19:45.012 bs=4096 00:19:45.012 iodepth=1 00:19:45.012 norandommap=0 00:19:45.012 numjobs=1 00:19:45.012 00:19:45.012 verify_dump=1 00:19:45.012 verify_backlog=512 00:19:45.012 verify_state_save=0 00:19:45.012 do_verify=1 00:19:45.012 verify=crc32c-intel 00:19:45.012 [job0] 00:19:45.012 filename=/dev/nvme0n1 00:19:45.012 Could not set queue depth (nvme0n1) 00:19:45.012 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.012 fio-3.35 00:19:45.012 Starting 1 thread 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:48.305 true 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:48.305 true 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:48.305 true 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:48.305 true 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.305 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.906 true 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.906 true 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.906 true 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.906 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:50.907 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.907 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.907 true 00:19:50.907 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.907 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:50.907 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78612 00:20:47.142 00:20:47.142 job0: (groupid=0, jobs=1): err= 0: pid=78638: Sun Nov 17 08:20:50 2024 00:20:47.142 read: IOPS=702, BW=2811KiB/s (2879kB/s)(165MiB/60000msec) 00:20:47.142 slat (usec): min=7, max=10934, avg= 9.26, stdev=62.38 00:20:47.142 clat (usec): min=157, max=40493k, avg=1212.03, stdev=197190.03 00:20:47.142 lat (usec): min=167, max=40493k, avg=1221.29, stdev=197190.03 00:20:47.142 clat percentiles (usec): 00:20:47.142 | 1.00th=[ 174], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 219], 00:20:47.142 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 262], 00:20:47.142 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:20:47.142 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 553], 00:20:47.142 | 99.99th=[ 676] 00:20:47.142 write: IOPS=708, BW=2833KiB/s (2901kB/s)(166MiB/60000msec); 0 zone resets 00:20:47.142 slat (usec): min=9, max=653, avg=13.59, stdev= 5.80 00:20:47.142 clat (usec): min=111, max=661, avg=184.42, stdev=29.91 00:20:47.142 lat (usec): min=126, max=888, avg=198.01, stdev=31.07 00:20:47.142 clat percentiles (usec): 00:20:47.142 | 1.00th=[ 128], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 161], 00:20:47.142 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:20:47.142 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 235], 00:20:47.142 | 99.00th=[ 260], 
99.50th=[ 273], 99.90th=[ 359], 99.95th=[ 482], 00:20:47.142 | 99.99th=[ 619] 00:20:47.142 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=8507.08, stdev=1382.91, samples=39 00:20:47.142 iops : min= 1024, max= 3072, avg=2126.77, stdev=345.73, samples=39 00:20:47.143 lat (usec) : 250=73.44%, 500=26.51%, 750=0.05%, 1000=0.01% 00:20:47.143 lat (msec) : >=2000=0.01% 00:20:47.143 cpu : usr=0.33%, sys=1.31%, ctx=84669, majf=0, minf=5 00:20:47.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.143 issued rwts: total=42168,42496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:47.143 00:20:47.143 Run status group 0 (all jobs): 00:20:47.143 READ: bw=2811KiB/s (2879kB/s), 2811KiB/s-2811KiB/s (2879kB/s-2879kB/s), io=165MiB (173MB), run=60000-60000msec 00:20:47.143 WRITE: bw=2833KiB/s (2901kB/s), 2833KiB/s-2833KiB/s (2901kB/s-2901kB/s), io=166MiB (174MB), run=60000-60000msec 00:20:47.143 00:20:47.143 Disk stats (read/write): 00:20:47.143 nvme0n1: ios=42201/42254, merge=0/0, ticks=10849/8018, in_queue=18867, util=99.82% 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:47.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:47.143 nvmf hotplug test: fio successful as expected 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:47.143 08:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.143 rmmod nvme_tcp 00:20:47.143 rmmod nvme_fabrics 00:20:47.143 rmmod nvme_keyring 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 78551 ']' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 78551 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 78551 ']' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 78551 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78551 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78551' 00:20:47.143 killing process with pid 78551 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 78551 00:20:47.143 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 78551 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:47.143 08:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.143 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:47.143 00:20:47.143 real 1m6.448s 00:20:47.143 user 4m2.839s 00:20:47.143 sys 0m16.177s 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:47.143 ************************************ 00:20:47.143 END TEST nvmf_initiator_timeout 00:20:47.143 ************************************ 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:47.143 ************************************ 00:20:47.143 START TEST nvmf_nsid 00:20:47.143 ************************************ 00:20:47.143 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:47.403 * Looking for test storage... 00:20:47.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.403 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.404 --rc genhtml_branch_coverage=1 00:20:47.404 --rc genhtml_function_coverage=1 00:20:47.404 --rc genhtml_legend=1 00:20:47.404 --rc geninfo_all_blocks=1 00:20:47.404 --rc geninfo_unexecuted_blocks=1 00:20:47.404 00:20:47.404 ' 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.404 --rc genhtml_branch_coverage=1 00:20:47.404 --rc genhtml_function_coverage=1 00:20:47.404 --rc genhtml_legend=1 00:20:47.404 --rc geninfo_all_blocks=1 00:20:47.404 --rc geninfo_unexecuted_blocks=1 00:20:47.404 00:20:47.404 ' 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.404 --rc genhtml_branch_coverage=1 00:20:47.404 --rc genhtml_function_coverage=1 00:20:47.404 --rc genhtml_legend=1 00:20:47.404 --rc geninfo_all_blocks=1 00:20:47.404 --rc geninfo_unexecuted_blocks=1 00:20:47.404 00:20:47.404 ' 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.404 --rc genhtml_branch_coverage=1 00:20:47.404 --rc genhtml_function_coverage=1 00:20:47.404 --rc genhtml_legend=1 00:20:47.404 --rc geninfo_all_blocks=1 00:20:47.404 --rc geninfo_unexecuted_blocks=1 00:20:47.404 00:20:47.404 ' 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
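The trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x, which controls which coverage flags get exported for the rest of the run. A minimal stand-alone sketch of that dotted-version comparison (the helper name and the strictly numeric fields are assumptions for illustration; the real cmp_versions also supports '>' and '=' operators):

    version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"   # split each version on '.', '-' or ':'
      IFS='.-:' read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}    # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
      done
      return 1                        # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: keep the old --rc lcov_* option names"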
00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.404 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:47.665 Cannot find device "nvmf_init_br" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:47.665 Cannot find device "nvmf_init_br2" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:47.665 Cannot find device "nvmf_tgt_br" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:47.665 Cannot find device "nvmf_tgt_br2" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:47.665 Cannot find device "nvmf_init_br" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:47.665 Cannot find device "nvmf_init_br2" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:47.665 Cannot find device "nvmf_tgt_br" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:47.665 Cannot find device "nvmf_tgt_br2" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:47.665 Cannot find device "nvmf_br" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:47.665 Cannot find device "nvmf_init_if" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:47.665 Cannot find device "nvmf_init_if2" 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:47.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:47.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:47.665 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:47.925 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:47.926 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
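The "Cannot find device" and "Cannot open network namespace" lines above are expected: cleanup of a previous topology is attempted before anything exists, and the traced "true" after each command swallows the failure. The commands that follow rebuild the veth topology the rest of the suite relies on. A condensed sketch of that topology, with interface names and 10.0.0.x/24 addresses taken from the trace (flag order and the helper structure in nvmf/common.sh may differ):

    # build the namespace and the four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
                nvmf_tgt_if:nvmf_tgt_br  nvmf_tgt_if2:nvmf_tgt_br2; do
      ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
    done

    # target-side NICs move into the namespace, initiator-side NICs stay on the host
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the four host-side peer ends together
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
    done

The iptables ACCEPT rules and the four pings that follow simply confirm this topology forwards traffic on the NVMe/TCP port before any target is started.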
00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:48.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:48.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:20:48.185 00:20:48.185 --- 10.0.0.3 ping statistics --- 00:20:48.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.185 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:48.185 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:48.185 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:20:48.185 00:20:48.185 --- 10.0.0.4 ping statistics --- 00:20:48.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.185 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:48.185 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:48.185 00:20:48.185 --- 10.0.0.1 ping statistics --- 00:20:48.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.185 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:48.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:48.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:20:48.185 00:20:48.185 --- 10.0.0.2 ping statistics --- 00:20:48.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.185 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=79522 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 79522 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79522 ']' 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.185 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:48.185 [2024-11-17 08:20:53.178353] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
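With all four addresses pingable, nvmfappstart loads nvme-tcp on the host and launches nvmf_tgt inside the namespace, then waits for its RPC socket before any rpc_cmd is issued. A hedged sketch of that start-and-wait step (the polling loop and its retry count are illustrative assumptions; the real waitforlisten helper in autotest_common.sh also checks that the pid stays alive):

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!

    # poll the RPC socket until the target answers
    for ((i = 0; i < 100; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null; then
        break
      fi
      sleep 0.5
    done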
00:20:48.185 [2024-11-17 08:20:53.178690] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.445 [2024-11-17 08:20:53.358236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.704 [2024-11-17 08:20:53.496802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.704 [2024-11-17 08:20:53.496864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.704 [2024-11-17 08:20:53.496880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.704 [2024-11-17 08:20:53.496902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.704 [2024-11-17 08:20:53.496915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.704 [2024-11-17 08:20:53.498265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.963 [2024-11-17 08:20:53.735029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:49.222 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.222 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:49.222 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.222 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.222 08:20:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=79554 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9cf9dd95-137c-47db-95ff-366e56c01562 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e38d1be1-962c-42aa-a594-39a4861d1f6c 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=131b6c94-504f-4a05-9836-52c230ed75f8 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:49.222 null0 00:20:49.222 null1 00:20:49.222 null2 00:20:49.222 [2024-11-17 08:20:54.106684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.222 [2024-11-17 08:20:54.130911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:49.222 [2024-11-17 08:20:54.143117] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:49.222 [2024-11-17 08:20:54.143344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79554 ] 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 79554 /var/tmp/tgt2.sock 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79554 ']' 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:49.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
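The three uuidgen values above are used as explicit namespace UUIDs when the null0/null1/null2 bdevs are attached as namespaces (the individual RPC calls are collapsed into the single rpc_cmd in the trace). Further down, the test connects to the second target and checks that each namespace reports the same value, dashes stripped and uppercased, as its NGUID in Identify Namespace. A minimal sketch of that check for the first namespace (the uuid2nguid helper name mirrors the trace; /dev/nvme0n1 assumes the device enumeration seen later in the log):

    uuid2nguid() { tr -d - <<< "${1^^}"; }   # strip dashes, uppercase

    expected=$(uuid2nguid 9cf9dd95-137c-47db-95ff-366e56c01562)
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ "${actual^^}" == "$expected" ]] && echo "ns1 NGUID matches its UUID"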
00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.222 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:49.481 [2024-11-17 08:20:54.320913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.481 [2024-11-17 08:20:54.421776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.739 [2024-11-17 08:20:54.658029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:50.308 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.308 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:50.308 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:50.876 [2024-11-17 08:20:55.583642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.876 [2024-11-17 08:20:55.599782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:50.876 nvme0n1 nvme0n2 00:20:50.876 nvme1n1 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:50.876 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:52.254 08:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9cf9dd95-137c-47db-95ff-366e56c01562 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9cf9dd95137c47db95ff366e56c01562 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9CF9DD95137C47DB95FF366E56C01562 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9CF9DD95137C47DB95FF366E56C01562 == \9\C\F\9\D\D\9\5\1\3\7\C\4\7\D\B\9\5\F\F\3\6\6\E\5\6\C\0\1\5\6\2 ]] 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e38d1be1-962c-42aa-a594-39a4861d1f6c 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:52.254 08:20:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e38d1be1962c42aaa59439a4861d1f6c 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E38D1BE1962C42AAA59439A4861D1F6C 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E38D1BE1962C42AAA59439A4861D1F6C == \E\3\8\D\1\B\E\1\9\6\2\C\4\2\A\A\A\5\9\4\3\9\A\4\8\6\1\D\1\F\6\C ]] 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:52.254 08:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 131b6c94-504f-4a05-9836-52c230ed75f8 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=131b6c94504f4a05983652c230ed75f8 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 131B6C94504F4A05983652C230ED75F8 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 131B6C94504F4A05983652C230ED75F8 == \1\3\1\B\6\C\9\4\5\0\4\F\4\A\0\5\9\8\3\6\5\2\C\2\3\0\E\D\7\5\F\8 ]] 00:20:52.254 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 79554 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79554 ']' 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79554 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79554 00:20:52.514 killing process with pid 79554 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79554' 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79554 00:20:52.514 08:20:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79554 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.050 rmmod nvme_tcp 00:20:55.050 rmmod nvme_fabrics 00:20:55.050 rmmod nvme_keyring 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 79522 ']' 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 79522 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79522 ']' 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79522 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79522 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.050 killing process with pid 79522 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79522' 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79522 00:20:55.050 08:20:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79522 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:55.988 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:56.248 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.248 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.508 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:56.508 00:20:56.508 real 0m9.168s 00:20:56.508 user 0m12.885s 00:20:56.508 sys 0m2.833s 00:20:56.508 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.508 ************************************ 00:20:56.508 END TEST nvmf_nsid 00:20:56.508 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:56.508 ************************************ 00:20:56.508 08:21:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:56.508 ************************************ 00:20:56.508 END TEST nvmf_target_extra 00:20:56.508 ************************************ 00:20:56.508 00:20:56.508 real 7m14.616s 00:20:56.508 user 16m59.964s 00:20:56.508 sys 2m8.668s 00:20:56.508 08:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.508 08:21:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.508 08:21:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:56.508 08:21:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.508 08:21:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.508 08:21:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.508 ************************************ 00:20:56.508 START TEST nvmf_host 00:20:56.508 ************************************ 00:20:56.508 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:56.768 * Looking for test storage... 
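The NGUID checks that nvmf_nsid just finished (nsid.sh@96-@100 above) reduce to: strip the dashes from the UUID handed to the target, then compare it case-insensitively against what nvme id-ns reports for the corresponding block device. A rough equivalent built only from the commands already traced above (the in-tree uuid2nguid/nvme_get_nguid helpers may differ in detail):

    uuid=9cf9dd95-137c-47db-95ff-366e56c01562                        # value given to the target earlier
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')     # uuid2nguid: drop dashes, upper-case
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ "$nguid" == "$expected" ]] && echo "nsid 1 NGUID matches its UUID"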
00:20:56.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.768 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:56.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.769 --rc genhtml_branch_coverage=1 00:20:56.769 --rc genhtml_function_coverage=1 00:20:56.769 --rc genhtml_legend=1 00:20:56.769 --rc geninfo_all_blocks=1 00:20:56.769 --rc geninfo_unexecuted_blocks=1 00:20:56.769 00:20:56.769 ' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:56.769 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:56.769 --rc genhtml_branch_coverage=1 00:20:56.769 --rc genhtml_function_coverage=1 00:20:56.769 --rc genhtml_legend=1 00:20:56.769 --rc geninfo_all_blocks=1 00:20:56.769 --rc geninfo_unexecuted_blocks=1 00:20:56.769 00:20:56.769 ' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:56.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.769 --rc genhtml_branch_coverage=1 00:20:56.769 --rc genhtml_function_coverage=1 00:20:56.769 --rc genhtml_legend=1 00:20:56.769 --rc geninfo_all_blocks=1 00:20:56.769 --rc geninfo_unexecuted_blocks=1 00:20:56.769 00:20:56.769 ' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:56.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.769 --rc genhtml_branch_coverage=1 00:20:56.769 --rc genhtml_function_coverage=1 00:20:56.769 --rc genhtml_legend=1 00:20:56.769 --rc geninfo_all_blocks=1 00:20:56.769 --rc geninfo_unexecuted_blocks=1 00:20:56.769 00:20:56.769 ' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:56.769 
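Before any of the host tests run, nvmf/common.sh mints a per-run host identity: nvme gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID suffix is reused as the host ID, so later nvme connect calls pass a matching --hostnqn/--hostid pair (exactly as the nsid connect above did). A condensed sketch; the parameter expansion is an assumed equivalent of common.sh@18, not a copy of it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:94e5d28e-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}       # keep only the trailing UUID
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"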
08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.769 ************************************ 00:20:56.769 START TEST nvmf_identify 00:20:56.769 ************************************ 00:20:56.769 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:57.030 * Looking for test storage... 00:20:57.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:57.030 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.031 --rc genhtml_branch_coverage=1 00:20:57.031 --rc genhtml_function_coverage=1 00:20:57.031 --rc genhtml_legend=1 00:20:57.031 --rc geninfo_all_blocks=1 00:20:57.031 --rc geninfo_unexecuted_blocks=1 00:20:57.031 00:20:57.031 ' 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.031 --rc genhtml_branch_coverage=1 00:20:57.031 --rc genhtml_function_coverage=1 00:20:57.031 --rc genhtml_legend=1 00:20:57.031 --rc geninfo_all_blocks=1 00:20:57.031 --rc geninfo_unexecuted_blocks=1 00:20:57.031 00:20:57.031 ' 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.031 --rc genhtml_branch_coverage=1 00:20:57.031 --rc genhtml_function_coverage=1 00:20:57.031 --rc genhtml_legend=1 00:20:57.031 --rc geninfo_all_blocks=1 00:20:57.031 --rc geninfo_unexecuted_blocks=1 00:20:57.031 00:20:57.031 ' 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.031 --rc genhtml_branch_coverage=1 00:20:57.031 --rc genhtml_function_coverage=1 00:20:57.031 --rc genhtml_legend=1 00:20:57.031 --rc geninfo_all_blocks=1 00:20:57.031 --rc geninfo_unexecuted_blocks=1 00:20:57.031 00:20:57.031 ' 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.031 08:21:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.031 
08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:57.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.031 08:21:02 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:57.031 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:57.032 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:57.292 Cannot find device "nvmf_init_br" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:57.292 Cannot find device "nvmf_init_br2" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:57.292 Cannot find device "nvmf_tgt_br" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:57.292 Cannot find device "nvmf_tgt_br2" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:57.292 Cannot find device "nvmf_init_br" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:57.292 Cannot find device "nvmf_init_br2" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:57.292 Cannot find device "nvmf_tgt_br" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:57.292 Cannot find device "nvmf_tgt_br2" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:57.292 Cannot find device "nvmf_br" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:57.292 Cannot find device "nvmf_init_if" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:57.292 Cannot find device "nvmf_init_if2" 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:57.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:57.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:57.292 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:57.553 
08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:57.553 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:57.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:57.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:20:57.813 00:20:57.813 --- 10.0.0.3 ping statistics --- 00:20:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.813 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:57.813 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:57.813 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:20:57.813 00:20:57.813 --- 10.0.0.4 ping statistics --- 00:20:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.813 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:57.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:57.813 00:20:57.813 --- 10.0.0.1 ping statistics --- 00:20:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.813 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:57.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:57.813 00:20:57.813 --- 10.0.0.2 ping statistics --- 00:20:57.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.813 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79959 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79959 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 79959 ']' 00:20:57.813 
08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.813 08:21:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:57.813 [2024-11-17 08:21:02.774988] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:57.813 [2024-11-17 08:21:02.775113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.073 [2024-11-17 08:21:02.965947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.332 [2024-11-17 08:21:03.118877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.332 [2024-11-17 08:21:03.118932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.332 [2024-11-17 08:21:03.118950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.332 [2024-11-17 08:21:03.118963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.332 [2024-11-17 08:21:03.118977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
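The topology nvmf_veth_init just verified with the four pings is two veth pairs joined by a bridge: the initiator ends (10.0.0.1, 10.0.0.2) stay in the default namespace, the target ends (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, and iptables ACCEPT rules open port 4420. Boiled down from the commands traced above, showing one pair (the *2 interfaces are set up identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                   # default namespace -> target side
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator side

The teardown at the end of the previous suite (nvmf_veth_fini above) undoes the same steps in reverse.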
00:20:58.332 [2024-11-17 08:21:03.121561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.332 [2024-11-17 08:21:03.121747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.332 [2024-11-17 08:21:03.121914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.332 [2024-11-17 08:21:03.122671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.592 [2024-11-17 08:21:03.378275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:58.592 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.592 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:58.592 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.592 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.592 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.592 [2024-11-17 08:21:03.583108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.851 Malloc0 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.851 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 [2024-11-17 08:21:03.785839] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.852 [ 00:20:58.852 { 00:20:58.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:58.852 "subtype": "Discovery", 00:20:58.852 "listen_addresses": [ 00:20:58.852 { 00:20:58.852 "trtype": "TCP", 00:20:58.852 "adrfam": "IPv4", 00:20:58.852 "traddr": "10.0.0.3", 00:20:58.852 "trsvcid": "4420" 00:20:58.852 } 00:20:58.852 ], 00:20:58.852 "allow_any_host": true, 00:20:58.852 "hosts": [] 00:20:58.852 }, 00:20:58.852 { 00:20:58.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.852 "subtype": "NVMe", 00:20:58.852 "listen_addresses": [ 00:20:58.852 { 00:20:58.852 "trtype": "TCP", 00:20:58.852 "adrfam": "IPv4", 00:20:58.852 "traddr": "10.0.0.3", 00:20:58.852 "trsvcid": "4420" 00:20:58.852 } 00:20:58.852 ], 00:20:58.852 "allow_any_host": true, 00:20:58.852 "hosts": [], 00:20:58.852 "serial_number": "SPDK00000000000001", 00:20:58.852 "model_number": "SPDK bdev Controller", 00:20:58.852 "max_namespaces": 32, 00:20:58.852 "min_cntlid": 1, 00:20:58.852 "max_cntlid": 65519, 00:20:58.852 "namespaces": [ 00:20:58.852 { 00:20:58.852 "nsid": 1, 00:20:58.852 "bdev_name": "Malloc0", 00:20:58.852 "name": "Malloc0", 00:20:58.852 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:58.852 "eui64": "ABCDEF0123456789", 00:20:58.852 "uuid": "53addfdb-d7ab-43c4-80af-47b228860671" 00:20:58.852 } 00:20:58.852 ] 00:20:58.852 } 00:20:58.852 ] 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.852 08:21:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:59.115 [2024-11-17 08:21:03.886602] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
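[editor's sketch] The rpc_cmd calls above (identify.sh lines 24-37) build the target configuration that the identify run then inspects: a TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with one namespace, and TCP listeners for both the subsystem and discovery on 10.0.0.3:4420. A hedged replay of that sequence with scripts/rpc.py looks roughly as follows; the address, serial number, and NGUID/EUI64 values are copied from the log, and SPDK_DIR is again an assumed path rather than anything the harness defines.

    #!/usr/bin/env bash
    # Sketch of the RPC sequence issued above, followed by the discovery
    # identify that produces the report later in this log.
    set -euo pipefail

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener "$NQN"    -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_get_subsystems                                # sanity check: discovery + cnode1 listed

    # Query the discovery controller over TCP, as identify.sh line 39 does.
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all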
00:20:59.115 [2024-11-17 08:21:03.886698] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79994 ] 00:20:59.115 [2024-11-17 08:21:04.062865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:59.115 [2024-11-17 08:21:04.063009] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:59.115 [2024-11-17 08:21:04.063020] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:59.115 [2024-11-17 08:21:04.063049] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:59.115 [2024-11-17 08:21:04.063064] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:59.115 [2024-11-17 08:21:04.063515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:59.115 [2024-11-17 08:21:04.063606] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:59.115 [2024-11-17 08:21:04.068500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:59.115 [2024-11-17 08:21:04.068536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:59.115 [2024-11-17 08:21:04.068545] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:59.115 [2024-11-17 08:21:04.068556] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:59.115 [2024-11-17 08:21:04.068674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.068690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.068699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.115 [2024-11-17 08:21:04.068756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:59.115 [2024-11-17 08:21:04.068801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.115 [2024-11-17 08:21:04.076818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.115 [2024-11-17 08:21:04.076848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.115 [2024-11-17 08:21:04.076856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.076866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.115 [2024-11-17 08:21:04.076892] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:59.115 [2024-11-17 08:21:04.076908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:59.115 [2024-11-17 08:21:04.076918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:59.115 [2024-11-17 08:21:04.076950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.076959] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.076967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.115 [2024-11-17 08:21:04.076984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.115 [2024-11-17 08:21:04.077016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.115 [2024-11-17 08:21:04.077110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.115 [2024-11-17 08:21:04.077120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.115 [2024-11-17 08:21:04.077127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.115 [2024-11-17 08:21:04.077151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:59.115 [2024-11-17 08:21:04.077163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:59.115 [2024-11-17 08:21:04.077174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.115 [2024-11-17 08:21:04.077206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.115 [2024-11-17 08:21:04.077228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.115 [2024-11-17 08:21:04.077294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.115 [2024-11-17 08:21:04.077302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.115 [2024-11-17 08:21:04.077308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.115 [2024-11-17 08:21:04.077327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:59.115 [2024-11-17 08:21:04.077340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:59.115 [2024-11-17 08:21:04.077354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.115 [2024-11-17 08:21:04.077380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.115 [2024-11-17 08:21:04.077399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.115 [2024-11-17 08:21:04.077462] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.115 [2024-11-17 08:21:04.077471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.115 [2024-11-17 08:21:04.077477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.115 [2024-11-17 08:21:04.077493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:59.115 [2024-11-17 08:21:04.077507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.115 [2024-11-17 08:21:04.077521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.115 [2024-11-17 08:21:04.077532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.115 [2024-11-17 08:21:04.077557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.115 [2024-11-17 08:21:04.077605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.115 [2024-11-17 08:21:04.077613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.115 [2024-11-17 08:21:04.077619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.077625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.116 [2024-11-17 08:21:04.077634] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:59.116 [2024-11-17 08:21:04.077644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:59.116 [2024-11-17 08:21:04.077655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:59.116 [2024-11-17 08:21:04.077774] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:59.116 [2024-11-17 08:21:04.077783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:59.116 [2024-11-17 08:21:04.077801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.077808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.077815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.077827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.116 [2024-11-17 08:21:04.077852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.116 [2024-11-17 08:21:04.077915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.116 [2024-11-17 08:21:04.077927] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.116 [2024-11-17 08:21:04.077933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.077939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.116 [2024-11-17 08:21:04.077947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:59.116 [2024-11-17 08:21:04.077961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.077968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.077974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.077985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.116 [2024-11-17 08:21:04.078003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.116 [2024-11-17 08:21:04.078068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.116 [2024-11-17 08:21:04.078077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.116 [2024-11-17 08:21:04.078082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.116 [2024-11-17 08:21:04.078096] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:59.116 [2024-11-17 08:21:04.078105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:59.116 [2024-11-17 08:21:04.078127] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:59.116 [2024-11-17 08:21:04.078143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:59.116 [2024-11-17 08:21:04.078167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.116 [2024-11-17 08:21:04.078208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.116 [2024-11-17 08:21:04.078325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.116 [2024-11-17 08:21:04.078339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.116 [2024-11-17 08:21:04.078345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078353] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:59.116 [2024-11-17 08:21:04.078362] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.116 [2024-11-17 08:21:04.078379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078392] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078400] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.116 [2024-11-17 08:21:04.078427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.116 [2024-11-17 08:21:04.078436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.116 [2024-11-17 08:21:04.078473] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:59.116 [2024-11-17 08:21:04.078482] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:59.116 [2024-11-17 08:21:04.078491] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:59.116 [2024-11-17 08:21:04.078500] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:59.116 [2024-11-17 08:21:04.078509] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:59.116 [2024-11-17 08:21:04.078518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:59.116 [2024-11-17 08:21:04.078531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:59.116 [2024-11-17 08:21:04.078543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:59.116 [2024-11-17 08:21:04.078603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.116 [2024-11-17 08:21:04.078653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.116 [2024-11-17 08:21:04.078662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.116 [2024-11-17 08:21:04.078667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.116 [2024-11-17 08:21:04.078690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078704] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.116 [2024-11-17 08:21:04.078732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.116 [2024-11-17 08:21:04.078762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.116 [2024-11-17 08:21:04.078791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.116 [2024-11-17 08:21:04.078827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:59.116 [2024-11-17 08:21:04.078850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:59.116 [2024-11-17 08:21:04.078860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.078870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.116 [2024-11-17 08:21:04.078881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.116 [2024-11-17 08:21:04.078907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.116 [2024-11-17 08:21:04.078915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:59.116 [2024-11-17 08:21:04.078922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:59.116 [2024-11-17 08:21:04.078929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.116 [2024-11-17 08:21:04.078935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.116 [2024-11-17 08:21:04.079019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.116 [2024-11-17 08:21:04.079027] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.116 [2024-11-17 08:21:04.079033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.079039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.116 [2024-11-17 08:21:04.079050] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:59.116 [2024-11-17 08:21:04.079058] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:59.116 [2024-11-17 08:21:04.079081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.116 [2024-11-17 08:21:04.079088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.117 [2024-11-17 08:21:04.079100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.117 [2024-11-17 08:21:04.079123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.117 [2024-11-17 08:21:04.079206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.117 [2024-11-17 08:21:04.079216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.117 [2024-11-17 08:21:04.079226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079233] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:59.117 [2024-11-17 08:21:04.079241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.117 [2024-11-17 08:21:04.079249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079267] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079274] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.117 [2024-11-17 08:21:04.079297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.117 [2024-11-17 08:21:04.079304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.117 [2024-11-17 08:21:04.079336] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:59.117 [2024-11-17 08:21:04.079395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.117 [2024-11-17 08:21:04.079417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.117 [2024-11-17 08:21:04.079427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:59.117 [2024-11-17 08:21:04.079456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:59.117 [2024-11-17 08:21:04.079473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.117 [2024-11-17 08:21:04.079504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.117 [2024-11-17 08:21:04.079513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:59.117 [2024-11-17 08:21:04.079773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.117 [2024-11-17 08:21:04.079793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.117 [2024-11-17 08:21:04.079800] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079812] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:59.117 [2024-11-17 08:21:04.079820] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:59.117 [2024-11-17 08:21:04.079829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079839] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079846] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.117 [2024-11-17 08:21:04.079866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.117 [2024-11-17 08:21:04.079872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:59.117 [2024-11-17 08:21:04.079902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.117 [2024-11-17 08:21:04.079911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.117 [2024-11-17 08:21:04.079917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.117 [2024-11-17 08:21:04.079953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.079962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.117 [2024-11-17 08:21:04.079974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.117 [2024-11-17 08:21:04.080011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.117 [2024-11-17 08:21:04.080087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.117 [2024-11-17 08:21:04.080105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.117 [2024-11-17 08:21:04.080111] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080117] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:59.117 [2024-11-17 08:21:04.080124] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:59.117 [2024-11-17 08:21:04.080131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080140] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.117 [2024-11-17 08:21:04.080165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.117 [2024-11-17 08:21:04.080171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.117 [2024-11-17 08:21:04.080200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.117 [2024-11-17 08:21:04.080219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.117 [2024-11-17 08:21:04.080242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.117 [2024-11-17 08:21:04.080343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.117 [2024-11-17 08:21:04.080352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.117 [2024-11-17 08:21:04.080358] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080364] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:59.117 [2024-11-17 08:21:04.080371] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:59.117 [2024-11-17 08:21:04.080378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080390] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080396] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.117 [2024-11-17 08:21:04.080424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.117 [2024-11-17 08:21:04.080429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.117 [2024-11-17 08:21:04.080468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.117 ===================================================== 00:20:59.117 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:59.117 ===================================================== 00:20:59.117 Controller Capabilities/Features 00:20:59.117 ================================ 00:20:59.117 Vendor ID: 0000 00:20:59.117 Subsystem Vendor ID: 0000 00:20:59.117 Serial Number: .................... 
00:20:59.117 Model Number: ........................................ 00:20:59.117 Firmware Version: 25.01 00:20:59.117 Recommended Arb Burst: 0 00:20:59.117 IEEE OUI Identifier: 00 00 00 00:20:59.117 Multi-path I/O 00:20:59.117 May have multiple subsystem ports: No 00:20:59.117 May have multiple controllers: No 00:20:59.117 Associated with SR-IOV VF: No 00:20:59.117 Max Data Transfer Size: 131072 00:20:59.117 Max Number of Namespaces: 0 00:20:59.117 Max Number of I/O Queues: 1024 00:20:59.117 NVMe Specification Version (VS): 1.3 00:20:59.117 NVMe Specification Version (Identify): 1.3 00:20:59.117 Maximum Queue Entries: 128 00:20:59.117 Contiguous Queues Required: Yes 00:20:59.117 Arbitration Mechanisms Supported 00:20:59.117 Weighted Round Robin: Not Supported 00:20:59.117 Vendor Specific: Not Supported 00:20:59.117 Reset Timeout: 15000 ms 00:20:59.117 Doorbell Stride: 4 bytes 00:20:59.117 NVM Subsystem Reset: Not Supported 00:20:59.117 Command Sets Supported 00:20:59.117 NVM Command Set: Supported 00:20:59.117 Boot Partition: Not Supported 00:20:59.117 Memory Page Size Minimum: 4096 bytes 00:20:59.117 Memory Page Size Maximum: 4096 bytes 00:20:59.117 Persistent Memory Region: Not Supported 00:20:59.117 Optional Asynchronous Events Supported 00:20:59.117 Namespace Attribute Notices: Not Supported 00:20:59.118 Firmware Activation Notices: Not Supported 00:20:59.118 ANA Change Notices: Not Supported 00:20:59.118 PLE Aggregate Log Change Notices: Not Supported 00:20:59.118 LBA Status Info Alert Notices: Not Supported 00:20:59.118 EGE Aggregate Log Change Notices: Not Supported 00:20:59.118 Normal NVM Subsystem Shutdown event: Not Supported 00:20:59.118 Zone Descriptor Change Notices: Not Supported 00:20:59.118 Discovery Log Change Notices: Supported 00:20:59.118 Controller Attributes 00:20:59.118 128-bit Host Identifier: Not Supported 00:20:59.118 Non-Operational Permissive Mode: Not Supported 00:20:59.118 NVM Sets: Not Supported 00:20:59.118 Read Recovery Levels: Not Supported 00:20:59.118 Endurance Groups: Not Supported 00:20:59.118 Predictable Latency Mode: Not Supported 00:20:59.118 Traffic Based Keep ALive: Not Supported 00:20:59.118 Namespace Granularity: Not Supported 00:20:59.118 SQ Associations: Not Supported 00:20:59.118 UUID List: Not Supported 00:20:59.118 Multi-Domain Subsystem: Not Supported 00:20:59.118 Fixed Capacity Management: Not Supported 00:20:59.118 Variable Capacity Management: Not Supported 00:20:59.118 Delete Endurance Group: Not Supported 00:20:59.118 Delete NVM Set: Not Supported 00:20:59.118 Extended LBA Formats Supported: Not Supported 00:20:59.118 Flexible Data Placement Supported: Not Supported 00:20:59.118 00:20:59.118 Controller Memory Buffer Support 00:20:59.118 ================================ 00:20:59.118 Supported: No 00:20:59.118 00:20:59.118 Persistent Memory Region Support 00:20:59.118 ================================ 00:20:59.118 Supported: No 00:20:59.118 00:20:59.118 Admin Command Set Attributes 00:20:59.118 ============================ 00:20:59.118 Security Send/Receive: Not Supported 00:20:59.118 Format NVM: Not Supported 00:20:59.118 Firmware Activate/Download: Not Supported 00:20:59.118 Namespace Management: Not Supported 00:20:59.118 Device Self-Test: Not Supported 00:20:59.118 Directives: Not Supported 00:20:59.118 NVMe-MI: Not Supported 00:20:59.118 Virtualization Management: Not Supported 00:20:59.118 Doorbell Buffer Config: Not Supported 00:20:59.118 Get LBA Status Capability: Not Supported 00:20:59.118 Command & Feature Lockdown Capability: 
Not Supported 00:20:59.118 Abort Command Limit: 1 00:20:59.118 Async Event Request Limit: 4 00:20:59.118 Number of Firmware Slots: N/A 00:20:59.118 Firmware Slot 1 Read-Only: N/A 00:20:59.118 Firmware Activation Without Reset: N/A 00:20:59.118 Multiple Update Detection Support: N/A 00:20:59.118 Firmware Update Granularity: No Information Provided 00:20:59.118 Per-Namespace SMART Log: No 00:20:59.118 Asymmetric Namespace Access Log Page: Not Supported 00:20:59.118 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:59.118 Command Effects Log Page: Not Supported 00:20:59.118 Get Log Page Extended Data: Supported 00:20:59.118 Telemetry Log Pages: Not Supported 00:20:59.118 Persistent Event Log Pages: Not Supported 00:20:59.118 Supported Log Pages Log Page: May Support 00:20:59.118 Commands Supported & Effects Log Page: Not Supported 00:20:59.118 Feature Identifiers & Effects Log Page:May Support 00:20:59.118 NVMe-MI Commands & Effects Log Page: May Support 00:20:59.118 Data Area 4 for Telemetry Log: Not Supported 00:20:59.118 Error Log Page Entries Supported: 128 00:20:59.118 Keep Alive: Not Supported 00:20:59.118 00:20:59.118 NVM Command Set Attributes 00:20:59.118 ========================== 00:20:59.118 Submission Queue Entry Size 00:20:59.118 Max: 1 00:20:59.118 Min: 1 00:20:59.118 Completion Queue Entry Size 00:20:59.118 Max: 1 00:20:59.118 Min: 1 00:20:59.118 Number of Namespaces: 0 00:20:59.118 Compare Command: Not Supported 00:20:59.118 Write Uncorrectable Command: Not Supported 00:20:59.118 Dataset Management Command: Not Supported 00:20:59.118 Write Zeroes Command: Not Supported 00:20:59.118 Set Features Save Field: Not Supported 00:20:59.118 Reservations: Not Supported 00:20:59.118 Timestamp: Not Supported 00:20:59.118 Copy: Not Supported 00:20:59.118 Volatile Write Cache: Not Present 00:20:59.118 Atomic Write Unit (Normal): 1 00:20:59.118 Atomic Write Unit (PFail): 1 00:20:59.118 Atomic Compare & Write Unit: 1 00:20:59.118 Fused Compare & Write: Supported 00:20:59.118 Scatter-Gather List 00:20:59.118 SGL Command Set: Supported 00:20:59.118 SGL Keyed: Supported 00:20:59.118 SGL Bit Bucket Descriptor: Not Supported 00:20:59.118 SGL Metadata Pointer: Not Supported 00:20:59.118 Oversized SGL: Not Supported 00:20:59.118 SGL Metadata Address: Not Supported 00:20:59.118 SGL Offset: Supported 00:20:59.118 Transport SGL Data Block: Not Supported 00:20:59.118 Replay Protected Memory Block: Not Supported 00:20:59.118 00:20:59.118 Firmware Slot Information 00:20:59.118 ========================= 00:20:59.118 Active slot: 0 00:20:59.118 00:20:59.118 00:20:59.118 Error Log 00:20:59.118 ========= 00:20:59.118 00:20:59.118 Active Namespaces 00:20:59.118 ================= 00:20:59.118 Discovery Log Page 00:20:59.118 ================== 00:20:59.118 Generation Counter: 2 00:20:59.118 Number of Records: 2 00:20:59.118 Record Format: 0 00:20:59.118 00:20:59.118 Discovery Log Entry 0 00:20:59.118 ---------------------- 00:20:59.118 Transport Type: 3 (TCP) 00:20:59.118 Address Family: 1 (IPv4) 00:20:59.118 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:59.118 Entry Flags: 00:20:59.118 Duplicate Returned Information: 1 00:20:59.118 Explicit Persistent Connection Support for Discovery: 1 00:20:59.118 Transport Requirements: 00:20:59.118 Secure Channel: Not Required 00:20:59.118 Port ID: 0 (0x0000) 00:20:59.118 Controller ID: 65535 (0xffff) 00:20:59.118 Admin Max SQ Size: 128 00:20:59.118 Transport Service Identifier: 4420 00:20:59.118 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:20:59.118 Transport Address: 10.0.0.3 00:20:59.118 Discovery Log Entry 1 00:20:59.118 ---------------------- 00:20:59.118 Transport Type: 3 (TCP) 00:20:59.118 Address Family: 1 (IPv4) 00:20:59.118 Subsystem Type: 2 (NVM Subsystem) 00:20:59.118 Entry Flags: 00:20:59.118 Duplicate Returned Information: 0 00:20:59.118 Explicit Persistent Connection Support for Discovery: 0 00:20:59.118 Transport Requirements: 00:20:59.118 Secure Channel: Not Required 00:20:59.118 Port ID: 0 (0x0000) 00:20:59.118 Controller ID: 65535 (0xffff) 00:20:59.118 Admin Max SQ Size: 128 00:20:59.118 Transport Service Identifier: 4420 00:20:59.118 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:59.118 Transport Address: 10.0.0.3 [2024-11-17 08:21:04.080669] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:59.118 [2024-11-17 08:21:04.080689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.118 [2024-11-17 08:21:04.080702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.118 [2024-11-17 08:21:04.080711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:59.118 [2024-11-17 08:21:04.080720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.118 [2024-11-17 08:21:04.080728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:59.118 [2024-11-17 08:21:04.080742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.118 [2024-11-17 08:21:04.080750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.118 [2024-11-17 08:21:04.080758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.118 [2024-11-17 08:21:04.080780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.118 [2024-11-17 08:21:04.080788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.118 [2024-11-17 08:21:04.080795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.118 [2024-11-17 08:21:04.080812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.118 [2024-11-17 08:21:04.080840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.118 [2024-11-17 08:21:04.080905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.118 [2024-11-17 08:21:04.080914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.118 [2024-11-17 08:21:04.080921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.118 [2024-11-17 08:21:04.080929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.118 [2024-11-17 08:21:04.080941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.118 [2024-11-17 08:21:04.080948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.118 
[2024-11-17 08:21:04.080955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.080970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.080995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081107] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:59.119 [2024-11-17 08:21:04.081116] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:59.119 [2024-11-17 08:21:04.081133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081411] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081832] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.081908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.081922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.081927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.081946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.081957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.081967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.081984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.082037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.082045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.082050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.082069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.082092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.082109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.082170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.082178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.082184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.082206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.082236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.082254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.082306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.082314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.082320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.082342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.119 [2024-11-17 08:21:04.082364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.119 [2024-11-17 08:21:04.082391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.119 [2024-11-17 08:21:04.082453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.119 [2024-11-17 08:21:04.082463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.119 [2024-11-17 08:21:04.082469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.119 [2024-11-17 08:21:04.082478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.119 [2024-11-17 08:21:04.082492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.082513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.082534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.082589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.082601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.082607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.082626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.082647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.082665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.082717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.082725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.082731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.082750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.082771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.082789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.082844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.082853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.082858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.082877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.082902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.082920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.082969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.082978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.082983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.082989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.083002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.083030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.083047] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.083090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.083098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.083104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.083126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.083148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.083165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.083216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.083230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.083236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.083254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.083278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.083296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.083354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.083363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.083368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.083387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.120 [2024-11-17 08:21:04.083410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.120 [2024-11-17 08:21:04.083428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.120 [2024-11-17 08:21:04.083502] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.120 [2024-11-17 08:21:04.083511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.120 [2024-11-17 08:21:04.083517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.120 [2024-11-17 08:21:04.083536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.120 [2024-11-17 08:21:04.083552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.083563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.083582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.083630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.083638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.083644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.083666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.083687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.083709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.083766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.083774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.083785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.083804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.083825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.083843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.083897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.083905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.083911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.083929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.083941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.083950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.083968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.084011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.084020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.084025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.084044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.084065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.084083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.084132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.084140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.084145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.084163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.084190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.084207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.084273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.084282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.084297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 
[2024-11-17 08:21:04.084303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.084321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.084333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.084352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.084373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.088467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.088501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.088508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.088515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.088556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.088565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.088571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.121 [2024-11-17 08:21:04.088584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.121 [2024-11-17 08:21:04.088613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.121 [2024-11-17 08:21:04.088693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.121 [2024-11-17 08:21:04.088702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.121 [2024-11-17 08:21:04.088708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.121 [2024-11-17 08:21:04.088713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.121 [2024-11-17 08:21:04.088724] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:20:59.381 00:20:59.381 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:59.381 [2024-11-17 08:21:04.209657] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:20:59.381 [2024-11-17 08:21:04.209737] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79998 ] 00:20:59.381 [2024-11-17 08:21:04.380992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:59.381 [2024-11-17 08:21:04.381128] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:59.381 [2024-11-17 08:21:04.381140] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:59.381 [2024-11-17 08:21:04.381168] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:59.381 [2024-11-17 08:21:04.381183] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:59.381 [2024-11-17 08:21:04.385589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:59.381 [2024-11-17 08:21:04.385667] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:59.644 [2024-11-17 08:21:04.396487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:59.644 [2024-11-17 08:21:04.396517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:59.644 [2024-11-17 08:21:04.396526] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:59.644 [2024-11-17 08:21:04.396533] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:59.644 [2024-11-17 08:21:04.396643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.644 [2024-11-17 08:21:04.396654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.644 [2024-11-17 08:21:04.396664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.644 [2024-11-17 08:21:04.396685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:59.644 [2024-11-17 08:21:04.396720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.644 [2024-11-17 08:21:04.404506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.644 [2024-11-17 08:21:04.404532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.644 [2024-11-17 08:21:04.404540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.644 [2024-11-17 08:21:04.404548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.644 [2024-11-17 08:21:04.404575] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:59.644 [2024-11-17 08:21:04.404591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:59.644 [2024-11-17 08:21:04.404602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:59.644 [2024-11-17 08:21:04.404626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.644 [2024-11-17 08:21:04.404634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 
[2024-11-17 08:21:04.404641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.404657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.404689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.404779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.404789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.404796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.404803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.404821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:59.645 [2024-11-17 08:21:04.404834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:59.645 [2024-11-17 08:21:04.404845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.404852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.404859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.404874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.404895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.404969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.404979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.404985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.404991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.405000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:59.645 [2024-11-17 08:21:04.405013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:59.645 [2024-11-17 08:21:04.405023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.405052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.405074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.405122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.405130] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.405136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.405151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:59.645 [2024-11-17 08:21:04.405179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.405207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.405227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.405269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.405277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.405282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.405297] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:59.645 [2024-11-17 08:21:04.405313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:59.645 [2024-11-17 08:21:04.405344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:59.645 [2024-11-17 08:21:04.405454] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:59.645 [2024-11-17 08:21:04.405462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:59.645 [2024-11-17 08:21:04.405492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.405519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.405541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.405599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.405609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.405614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 
[2024-11-17 08:21:04.405621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.405630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:59.645 [2024-11-17 08:21:04.405648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.405674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.405694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.405755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.405764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.405770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.405784] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:59.645 [2024-11-17 08:21:04.405792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:59.645 [2024-11-17 08:21:04.405819] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:59.645 [2024-11-17 08:21:04.405832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:59.645 [2024-11-17 08:21:04.405849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.405862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.405874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.645 [2024-11-17 08:21:04.405896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.405999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.645 [2024-11-17 08:21:04.406011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.645 [2024-11-17 08:21:04.406017] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.406025] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:59.645 [2024-11-17 08:21:04.406034] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.645 [2024-11-17 08:21:04.406042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:59.645 [2024-11-17 08:21:04.406054] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.406064] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.406076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.645 [2024-11-17 08:21:04.406085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.645 [2024-11-17 08:21:04.406090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.406096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.645 [2024-11-17 08:21:04.406116] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:59.645 [2024-11-17 08:21:04.406125] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:59.645 [2024-11-17 08:21:04.406139] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:59.645 [2024-11-17 08:21:04.406147] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:59.645 [2024-11-17 08:21:04.406156] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:59.645 [2024-11-17 08:21:04.406164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:59.645 [2024-11-17 08:21:04.406177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:59.645 [2024-11-17 08:21:04.406189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.406202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.645 [2024-11-17 08:21:04.406209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.645 [2024-11-17 08:21:04.406222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:59.645 [2024-11-17 08:21:04.406242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.645 [2024-11-17 08:21:04.406294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.646 [2024-11-17 08:21:04.406309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.646 [2024-11-17 08:21:04.406315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.646 [2024-11-17 08:21:04.406342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.406382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.646 [2024-11-17 08:21:04.406393] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.406414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.646 [2024-11-17 08:21:04.406422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.406461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.646 [2024-11-17 08:21:04.406470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.406493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.646 [2024-11-17 08:21:04.406501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.406524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.406534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.406553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.646 [2024-11-17 08:21:04.406581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:59.646 [2024-11-17 08:21:04.406590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:59.646 [2024-11-17 08:21:04.406600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:59.646 [2024-11-17 08:21:04.406610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.646 [2024-11-17 08:21:04.406617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.646 [2024-11-17 08:21:04.406693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.646 [2024-11-17 08:21:04.406702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.646 [2024-11-17 08:21:04.406711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.646 [2024-11-17 08:21:04.406726] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:59.646 [2024-11-17 08:21:04.406739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.406754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.406765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.406775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.406801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:59.646 [2024-11-17 08:21:04.406821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.646 [2024-11-17 08:21:04.406874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.646 [2024-11-17 08:21:04.406882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.646 [2024-11-17 08:21:04.406888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.406893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.646 [2024-11-17 08:21:04.406979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.406997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.407012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.407037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.646 [2024-11-17 08:21:04.407059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.646 [2024-11-17 08:21:04.407136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.646 [2024-11-17 08:21:04.407150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.646 [2024-11-17 08:21:04.407160] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407167] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:59.646 [2024-11-17 08:21:04.407176] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.646 [2024-11-17 08:21:04.407183] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407204] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.646 [2024-11-17 08:21:04.407224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.646 [2024-11-17 08:21:04.407230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.646 [2024-11-17 08:21:04.407271] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:59.646 [2024-11-17 08:21:04.407293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.407318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.407332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.407355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.646 [2024-11-17 08:21:04.407376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.646 [2024-11-17 08:21:04.407474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.646 [2024-11-17 08:21:04.407484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.646 [2024-11-17 08:21:04.407490] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407496] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:59.646 [2024-11-17 08:21:04.407504] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.646 [2024-11-17 08:21:04.407511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407521] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407526] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.646 [2024-11-17 08:21:04.407548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.646 [2024-11-17 08:21:04.407554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.646 [2024-11-17 08:21:04.407598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.407622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:59.646 [2024-11-17 08:21:04.407638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.646 [2024-11-17 08:21:04.407657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.646 [2024-11-17 08:21:04.407678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.646 [2024-11-17 08:21:04.407751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.646 [2024-11-17 08:21:04.407759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.646 [2024-11-17 08:21:04.407765] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.646 [2024-11-17 08:21:04.407771] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:59.646 [2024-11-17 08:21:04.407778] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.646 [2024-11-17 08:21:04.407785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.407794] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.407806] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.407832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.647 [2024-11-17 08:21:04.407840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.647 [2024-11-17 08:21:04.407846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.407851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.647 [2024-11-17 08:21:04.407883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407954] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:59.647 [2024-11-17 08:21:04.407965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:59.647 [2024-11-17 08:21:04.407986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:59.647 [2024-11-17 08:21:04.408031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.408051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.408062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.408085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:59.647 [2024-11-17 08:21:04.408111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.647 [2024-11-17 08:21:04.408120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:59.647 [2024-11-17 08:21:04.408187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.647 [2024-11-17 08:21:04.408200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.647 [2024-11-17 08:21:04.408207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.647 [2024-11-17 08:21:04.408228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.647 [2024-11-17 08:21:04.408236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.647 [2024-11-17 08:21:04.408242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:59.647 [2024-11-17 08:21:04.408263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.408279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.408298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:59.647 [2024-11-17 08:21:04.408350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.647 [2024-11-17 08:21:04.408358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.647 [2024-11-17 08:21:04.408364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:59.647 [2024-11-17 08:21:04.408382] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.408388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.408398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.408415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:59.647 [2024-11-17 08:21:04.412478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.647 [2024-11-17 08:21:04.412503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.647 [2024-11-17 08:21:04.412510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.412516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:59.647 [2024-11-17 08:21:04.412541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.412548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.412565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.412593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:59.647 [2024-11-17 08:21:04.412679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.647 [2024-11-17 08:21:04.412691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.647 [2024-11-17 08:21:04.412696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.412702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:59.647 [2024-11-17 08:21:04.412732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.412740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.412752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.412763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.412770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.412780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.412798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.412805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.412819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.412834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:59.647 [2024-11-17 08:21:04.412840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:59.647 [2024-11-17 08:21:04.412851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.647 [2024-11-17 08:21:04.412871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:59.647 [2024-11-17 08:21:04.412880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:59.647 [2024-11-17 08:21:04.412887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:59.647 [2024-11-17 08:21:04.412893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:59.647 [2024-11-17 08:21:04.413050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.647 [2024-11-17 08:21:04.413060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.647 [2024-11-17 08:21:04.413066] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413073] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:59.647 [2024-11-17 08:21:04.413080] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:59.647 [2024-11-17 08:21:04.413091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413122] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.647 [2024-11-17 08:21:04.413145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.647 [2024-11-17 08:21:04.413150] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413158] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:59.647 [2024-11-17 08:21:04.413165] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:59.647 [2024-11-17 08:21:04.413171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413183] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413189] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.647 [2024-11-17 08:21:04.413208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.647 [2024-11-17 08:21:04.413213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413219] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:59.647 [2024-11-17 08:21:04.413225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:59.647 
[2024-11-17 08:21:04.413232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.647 [2024-11-17 08:21:04.413243] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:59.648 [2024-11-17 08:21:04.413269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:59.648 [2024-11-17 08:21:04.413274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:59.648 [2024-11-17 08:21:04.413286] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:59.648 [2024-11-17 08:21:04.413292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413301] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413306] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.648 [2024-11-17 08:21:04.413321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.648 [2024-11-17 08:21:04.413330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:59.648 [2024-11-17 08:21:04.413363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.648 [2024-11-17 08:21:04.413371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.648 [2024-11-17 08:21:04.413377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:59.648 [2024-11-17 08:21:04.413397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.648 [2024-11-17 08:21:04.413404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.648 [2024-11-17 08:21:04.413409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:59.648 [2024-11-17 08:21:04.413425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.648 [2024-11-17 08:21:04.413436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.648 [2024-11-17 08:21:04.413455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.648 [2024-11-17 08:21:04.413460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:59.648 ===================================================== 00:20:59.648 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.648 ===================================================== 00:20:59.648 Controller Capabilities/Features 00:20:59.648 ================================ 00:20:59.648 Vendor ID: 8086 00:20:59.648 Subsystem Vendor ID: 8086 
00:20:59.648 Serial Number: SPDK00000000000001 00:20:59.648 Model Number: SPDK bdev Controller 00:20:59.648 Firmware Version: 25.01 00:20:59.648 Recommended Arb Burst: 6 00:20:59.648 IEEE OUI Identifier: e4 d2 5c 00:20:59.648 Multi-path I/O 00:20:59.648 May have multiple subsystem ports: Yes 00:20:59.648 May have multiple controllers: Yes 00:20:59.648 Associated with SR-IOV VF: No 00:20:59.648 Max Data Transfer Size: 131072 00:20:59.648 Max Number of Namespaces: 32 00:20:59.648 Max Number of I/O Queues: 127 00:20:59.648 NVMe Specification Version (VS): 1.3 00:20:59.648 NVMe Specification Version (Identify): 1.3 00:20:59.648 Maximum Queue Entries: 128 00:20:59.648 Contiguous Queues Required: Yes 00:20:59.648 Arbitration Mechanisms Supported 00:20:59.648 Weighted Round Robin: Not Supported 00:20:59.648 Vendor Specific: Not Supported 00:20:59.648 Reset Timeout: 15000 ms 00:20:59.648 Doorbell Stride: 4 bytes 00:20:59.648 NVM Subsystem Reset: Not Supported 00:20:59.648 Command Sets Supported 00:20:59.648 NVM Command Set: Supported 00:20:59.648 Boot Partition: Not Supported 00:20:59.648 Memory Page Size Minimum: 4096 bytes 00:20:59.648 Memory Page Size Maximum: 4096 bytes 00:20:59.648 Persistent Memory Region: Not Supported 00:20:59.648 Optional Asynchronous Events Supported 00:20:59.648 Namespace Attribute Notices: Supported 00:20:59.648 Firmware Activation Notices: Not Supported 00:20:59.648 ANA Change Notices: Not Supported 00:20:59.648 PLE Aggregate Log Change Notices: Not Supported 00:20:59.648 LBA Status Info Alert Notices: Not Supported 00:20:59.648 EGE Aggregate Log Change Notices: Not Supported 00:20:59.648 Normal NVM Subsystem Shutdown event: Not Supported 00:20:59.648 Zone Descriptor Change Notices: Not Supported 00:20:59.648 Discovery Log Change Notices: Not Supported 00:20:59.648 Controller Attributes 00:20:59.648 128-bit Host Identifier: Supported 00:20:59.648 Non-Operational Permissive Mode: Not Supported 00:20:59.648 NVM Sets: Not Supported 00:20:59.648 Read Recovery Levels: Not Supported 00:20:59.648 Endurance Groups: Not Supported 00:20:59.648 Predictable Latency Mode: Not Supported 00:20:59.648 Traffic Based Keep ALive: Not Supported 00:20:59.648 Namespace Granularity: Not Supported 00:20:59.648 SQ Associations: Not Supported 00:20:59.648 UUID List: Not Supported 00:20:59.648 Multi-Domain Subsystem: Not Supported 00:20:59.648 Fixed Capacity Management: Not Supported 00:20:59.648 Variable Capacity Management: Not Supported 00:20:59.648 Delete Endurance Group: Not Supported 00:20:59.648 Delete NVM Set: Not Supported 00:20:59.648 Extended LBA Formats Supported: Not Supported 00:20:59.648 Flexible Data Placement Supported: Not Supported 00:20:59.648 00:20:59.648 Controller Memory Buffer Support 00:20:59.648 ================================ 00:20:59.648 Supported: No 00:20:59.648 00:20:59.648 Persistent Memory Region Support 00:20:59.648 ================================ 00:20:59.648 Supported: No 00:20:59.648 00:20:59.648 Admin Command Set Attributes 00:20:59.648 ============================ 00:20:59.648 Security Send/Receive: Not Supported 00:20:59.648 Format NVM: Not Supported 00:20:59.648 Firmware Activate/Download: Not Supported 00:20:59.648 Namespace Management: Not Supported 00:20:59.648 Device Self-Test: Not Supported 00:20:59.648 Directives: Not Supported 00:20:59.648 NVMe-MI: Not Supported 00:20:59.648 Virtualization Management: Not Supported 00:20:59.648 Doorbell Buffer Config: Not Supported 00:20:59.648 Get LBA Status Capability: Not Supported 00:20:59.648 Command & 
Feature Lockdown Capability: Not Supported 00:20:59.648 Abort Command Limit: 4 00:20:59.648 Async Event Request Limit: 4 00:20:59.648 Number of Firmware Slots: N/A 00:20:59.648 Firmware Slot 1 Read-Only: N/A 00:20:59.648 Firmware Activation Without Reset: N/A 00:20:59.648 Multiple Update Detection Support: N/A 00:20:59.648 Firmware Update Granularity: No Information Provided 00:20:59.648 Per-Namespace SMART Log: No 00:20:59.648 Asymmetric Namespace Access Log Page: Not Supported 00:20:59.648 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:59.648 Command Effects Log Page: Supported 00:20:59.648 Get Log Page Extended Data: Supported 00:20:59.648 Telemetry Log Pages: Not Supported 00:20:59.648 Persistent Event Log Pages: Not Supported 00:20:59.648 Supported Log Pages Log Page: May Support 00:20:59.648 Commands Supported & Effects Log Page: Not Supported 00:20:59.648 Feature Identifiers & Effects Log Page:May Support 00:20:59.648 NVMe-MI Commands & Effects Log Page: May Support 00:20:59.648 Data Area 4 for Telemetry Log: Not Supported 00:20:59.648 Error Log Page Entries Supported: 128 00:20:59.648 Keep Alive: Supported 00:20:59.648 Keep Alive Granularity: 10000 ms 00:20:59.648 00:20:59.648 NVM Command Set Attributes 00:20:59.648 ========================== 00:20:59.648 Submission Queue Entry Size 00:20:59.648 Max: 64 00:20:59.648 Min: 64 00:20:59.648 Completion Queue Entry Size 00:20:59.648 Max: 16 00:20:59.648 Min: 16 00:20:59.648 Number of Namespaces: 32 00:20:59.648 Compare Command: Supported 00:20:59.648 Write Uncorrectable Command: Not Supported 00:20:59.648 Dataset Management Command: Supported 00:20:59.648 Write Zeroes Command: Supported 00:20:59.648 Set Features Save Field: Not Supported 00:20:59.648 Reservations: Supported 00:20:59.648 Timestamp: Not Supported 00:20:59.648 Copy: Supported 00:20:59.648 Volatile Write Cache: Present 00:20:59.648 Atomic Write Unit (Normal): 1 00:20:59.648 Atomic Write Unit (PFail): 1 00:20:59.648 Atomic Compare & Write Unit: 1 00:20:59.648 Fused Compare & Write: Supported 00:20:59.648 Scatter-Gather List 00:20:59.648 SGL Command Set: Supported 00:20:59.648 SGL Keyed: Supported 00:20:59.648 SGL Bit Bucket Descriptor: Not Supported 00:20:59.648 SGL Metadata Pointer: Not Supported 00:20:59.648 Oversized SGL: Not Supported 00:20:59.648 SGL Metadata Address: Not Supported 00:20:59.648 SGL Offset: Supported 00:20:59.648 Transport SGL Data Block: Not Supported 00:20:59.648 Replay Protected Memory Block: Not Supported 00:20:59.648 00:20:59.648 Firmware Slot Information 00:20:59.648 ========================= 00:20:59.648 Active slot: 1 00:20:59.649 Slot 1 Firmware Revision: 25.01 00:20:59.649 00:20:59.649 00:20:59.649 Commands Supported and Effects 00:20:59.649 ============================== 00:20:59.649 Admin Commands 00:20:59.649 -------------- 00:20:59.649 Get Log Page (02h): Supported 00:20:59.649 Identify (06h): Supported 00:20:59.649 Abort (08h): Supported 00:20:59.649 Set Features (09h): Supported 00:20:59.649 Get Features (0Ah): Supported 00:20:59.649 Asynchronous Event Request (0Ch): Supported 00:20:59.649 Keep Alive (18h): Supported 00:20:59.649 I/O Commands 00:20:59.649 ------------ 00:20:59.649 Flush (00h): Supported LBA-Change 00:20:59.649 Write (01h): Supported LBA-Change 00:20:59.649 Read (02h): Supported 00:20:59.649 Compare (05h): Supported 00:20:59.649 Write Zeroes (08h): Supported LBA-Change 00:20:59.649 Dataset Management (09h): Supported LBA-Change 00:20:59.649 Copy (19h): Supported LBA-Change 00:20:59.649 00:20:59.649 Error Log 00:20:59.649 
========= 00:20:59.649 00:20:59.649 Arbitration 00:20:59.649 =========== 00:20:59.649 Arbitration Burst: 1 00:20:59.649 00:20:59.649 Power Management 00:20:59.649 ================ 00:20:59.649 Number of Power States: 1 00:20:59.649 Current Power State: Power State #0 00:20:59.649 Power State #0: 00:20:59.649 Max Power: 0.00 W 00:20:59.649 Non-Operational State: Operational 00:20:59.649 Entry Latency: Not Reported 00:20:59.649 Exit Latency: Not Reported 00:20:59.649 Relative Read Throughput: 0 00:20:59.649 Relative Read Latency: 0 00:20:59.649 Relative Write Throughput: 0 00:20:59.649 Relative Write Latency: 0 00:20:59.649 Idle Power: Not Reported 00:20:59.649 Active Power: Not Reported 00:20:59.649 Non-Operational Permissive Mode: Not Supported 00:20:59.649 00:20:59.649 Health Information 00:20:59.649 ================== 00:20:59.649 Critical Warnings: 00:20:59.649 Available Spare Space: OK 00:20:59.649 Temperature: OK 00:20:59.649 Device Reliability: OK 00:20:59.649 Read Only: No 00:20:59.649 Volatile Memory Backup: OK 00:20:59.649 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:59.649 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:59.649 Available Spare: 0% 00:20:59.649 Available Spare Threshold: 0% 00:20:59.649 Life Percentage Used:[2024-11-17 08:21:04.413639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.413649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:59.649 [2024-11-17 08:21:04.413662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.649 [2024-11-17 08:21:04.413689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:59.649 [2024-11-17 08:21:04.413748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.649 [2024-11-17 08:21:04.413757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.649 [2024-11-17 08:21:04.413764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.413775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.413852] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:59.649 [2024-11-17 08:21:04.413878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.413890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.649 [2024-11-17 08:21:04.413899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.413907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.649 [2024-11-17 08:21:04.413915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.413923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.649 [2024-11-17 08:21:04.413931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
00:20:59.649 [2024-11-17 08:21:04.413943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:59.649 [2024-11-17 08:21:04.413956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.413963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.413969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.649 [2024-11-17 08:21:04.413980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.649 [2024-11-17 08:21:04.414006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.649 [2024-11-17 08:21:04.414055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.649 [2024-11-17 08:21:04.414068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.649 [2024-11-17 08:21:04.414078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.414098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.649 [2024-11-17 08:21:04.414123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.649 [2024-11-17 08:21:04.414146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.649 [2024-11-17 08:21:04.414226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.649 [2024-11-17 08:21:04.414234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.649 [2024-11-17 08:21:04.414240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.414255] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:59.649 [2024-11-17 08:21:04.414264] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:59.649 [2024-11-17 08:21:04.414282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.649 [2024-11-17 08:21:04.414311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.649 [2024-11-17 08:21:04.414331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.649 [2024-11-17 08:21:04.414409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.649 [2024-11-17 
08:21:04.414423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.649 [2024-11-17 08:21:04.414432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.649 [2024-11-17 08:21:04.414469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.649 [2024-11-17 08:21:04.414482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.649 [2024-11-17 08:21:04.414492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.414513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.414566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.414575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.414580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.414599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.414620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.414639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.414685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.414694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.414699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.414718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.414743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.414761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.414807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.414816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.414821] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.414840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.414862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.414879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.414926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.414934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.414940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.414959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.414970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.414984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.415077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.415102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415189] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.415205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.415228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.415342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.415368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.415484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.415506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.415609] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.415630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.650 [2024-11-17 08:21:04.415734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.650 [2024-11-17 08:21:04.415777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.650 [2024-11-17 08:21:04.415799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.650 [2024-11-17 08:21:04.415847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.650 [2024-11-17 08:21:04.415856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.650 [2024-11-17 08:21:04.415861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.650 [2024-11-17 08:21:04.415867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.651 [2024-11-17 08:21:04.415880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.415890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.415896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.651 [2024-11-17 08:21:04.415906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.651 [2024-11-17 08:21:04.415923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.651 [2024-11-17 08:21:04.416024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.651 [2024-11-17 08:21:04.416032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.651 [2024-11-17 08:21:04.416037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.651 [2024-11-17 08:21:04.416056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:59.651 [2024-11-17 08:21:04.416068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.651 [2024-11-17 08:21:04.416077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.651 [2024-11-17 08:21:04.416095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.651 [2024-11-17 08:21:04.416138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.651 [2024-11-17 08:21:04.416146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.651 [2024-11-17 08:21:04.416152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.651 [2024-11-17 08:21:04.416170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.651 [2024-11-17 08:21:04.416192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.651 [2024-11-17 08:21:04.416209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.651 [2024-11-17 08:21:04.416255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.651 [2024-11-17 08:21:04.416263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.651 [2024-11-17 08:21:04.416269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.651 [2024-11-17 08:21:04.416287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:59.651 [2024-11-17 08:21:04.416311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.651 [2024-11-17 08:21:04.416329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.651 [2024-11-17 08:21:04.416384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.651 [2024-11-17 08:21:04.416393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.651 [2024-11-17 08:21:04.416399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.651 [2024-11-17 08:21:04.416421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.416433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 
00:20:59.651 [2024-11-17 08:21:04.420510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.651 [2024-11-17 08:21:04.420548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:59.651 [2024-11-17 08:21:04.420633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:59.651 [2024-11-17 08:21:04.420649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:59.651 [2024-11-17 08:21:04.420655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:59.651 [2024-11-17 08:21:04.420662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:59.651 [2024-11-17 08:21:04.420678] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:20:59.651 0% 00:20:59.651 Data Units Read: 0 00:20:59.651 Data Units Written: 0 00:20:59.651 Host Read Commands: 0 00:20:59.651 Host Write Commands: 0 00:20:59.651 Controller Busy Time: 0 minutes 00:20:59.651 Power Cycles: 0 00:20:59.651 Power On Hours: 0 hours 00:20:59.651 Unsafe Shutdowns: 0 00:20:59.651 Unrecoverable Media Errors: 0 00:20:59.651 Lifetime Error Log Entries: 0 00:20:59.651 Warning Temperature Time: 0 minutes 00:20:59.651 Critical Temperature Time: 0 minutes 00:20:59.651 00:20:59.651 Number of Queues 00:20:59.651 ================ 00:20:59.651 Number of I/O Submission Queues: 127 00:20:59.651 Number of I/O Completion Queues: 127 00:20:59.651 00:20:59.651 Active Namespaces 00:20:59.651 ================= 00:20:59.651 Namespace ID:1 00:20:59.651 Error Recovery Timeout: Unlimited 00:20:59.651 Command Set Identifier: NVM (00h) 00:20:59.651 Deallocate: Supported 00:20:59.651 Deallocated/Unwritten Error: Not Supported 00:20:59.651 Deallocated Read Value: Unknown 00:20:59.651 Deallocate in Write Zeroes: Not Supported 00:20:59.651 Deallocated Guard Field: 0xFFFF 00:20:59.651 Flush: Supported 00:20:59.651 Reservation: Supported 00:20:59.651 Namespace Sharing Capabilities: Multiple Controllers 00:20:59.651 Size (in LBAs): 131072 (0GiB) 00:20:59.651 Capacity (in LBAs): 131072 (0GiB) 00:20:59.651 Utilization (in LBAs): 131072 (0GiB) 00:20:59.651 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:59.651 EUI64: ABCDEF0123456789 00:20:59.651 UUID: 53addfdb-d7ab-43c4-80af-47b228860671 00:20:59.651 Thin Provisioning: Not Supported 00:20:59.651 Per-NS Atomic Units: Yes 00:20:59.651 Atomic Boundary Size (Normal): 0 00:20:59.651 Atomic Boundary Size (PFail): 0 00:20:59.651 Atomic Boundary Offset: 0 00:20:59.651 Maximum Single Source Range Length: 65535 00:20:59.651 Maximum Copy Length: 65535 00:20:59.651 Maximum Source Range Count: 1 00:20:59.651 NGUID/EUI64 Never Reused: No 00:20:59.651 Namespace Write Protected: No 00:20:59.651 Number of LBA Formats: 1 00:20:59.651 Current LBA Format: LBA Format #00 00:20:59.651 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:59.651 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.651 rmmod nvme_tcp 00:20:59.651 rmmod nvme_fabrics 00:20:59.651 rmmod nvme_keyring 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 79959 ']' 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 79959 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 79959 ']' 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 79959 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.651 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79959 00:20:59.911 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.911 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.911 killing process with pid 79959 00:20:59.911 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79959' 00:20:59.911 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 79959 00:20:59.911 08:21:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 79959 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:01.293 08:21:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:01.293 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:21:01.553 00:21:01.553 real 0m4.731s 00:21:01.553 user 0m11.660s 00:21:01.553 sys 0m1.411s 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.553 ************************************ 00:21:01.553 END TEST nvmf_identify 00:21:01.553 ************************************ 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.553 ************************************ 00:21:01.553 START TEST nvmf_perf 00:21:01.553 ************************************ 00:21:01.553 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:01.813 * Looking for test storage... 
00:21:01.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.813 --rc genhtml_branch_coverage=1 00:21:01.813 --rc genhtml_function_coverage=1 00:21:01.813 --rc genhtml_legend=1 00:21:01.813 --rc geninfo_all_blocks=1 00:21:01.813 --rc geninfo_unexecuted_blocks=1 00:21:01.813 00:21:01.813 ' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.813 --rc genhtml_branch_coverage=1 00:21:01.813 --rc genhtml_function_coverage=1 00:21:01.813 --rc genhtml_legend=1 00:21:01.813 --rc geninfo_all_blocks=1 00:21:01.813 --rc geninfo_unexecuted_blocks=1 00:21:01.813 00:21:01.813 ' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.813 --rc genhtml_branch_coverage=1 00:21:01.813 --rc genhtml_function_coverage=1 00:21:01.813 --rc genhtml_legend=1 00:21:01.813 --rc geninfo_all_blocks=1 00:21:01.813 --rc geninfo_unexecuted_blocks=1 00:21:01.813 00:21:01.813 ' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.813 --rc genhtml_branch_coverage=1 00:21:01.813 --rc genhtml_function_coverage=1 00:21:01.813 --rc genhtml_legend=1 00:21:01.813 --rc geninfo_all_blocks=1 00:21:01.813 --rc geninfo_unexecuted_blocks=1 00:21:01.813 00:21:01.813 ' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.813 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:01.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:21:01.814 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:02.074 Cannot find device "nvmf_init_br" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:02.074 Cannot find device "nvmf_init_br2" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:02.074 Cannot find device "nvmf_tgt_br" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.074 Cannot find device "nvmf_tgt_br2" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:02.074 Cannot find device "nvmf_init_br" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:02.074 Cannot find device "nvmf_init_br2" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:02.074 Cannot find device "nvmf_tgt_br" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:02.074 Cannot find device "nvmf_tgt_br2" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:02.074 Cannot find device "nvmf_br" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:02.074 Cannot find device "nvmf_init_if" 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:21:02.074 08:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:02.074 Cannot find device "nvmf_init_if2" 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:02.074 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:02.334 08:21:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.334 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:02.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:21:02.335 00:21:02.335 --- 10.0.0.3 ping statistics --- 00:21:02.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.335 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:02.335 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:02.335 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:21:02.335 00:21:02.335 --- 10.0.0.4 ping statistics --- 00:21:02.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.335 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:02.335 00:21:02.335 --- 10.0.0.1 ping statistics --- 00:21:02.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.335 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:02.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:21:02.335 00:21:02.335 --- 10.0.0.2 ping statistics --- 00:21:02.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.335 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.335 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=80235 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 80235 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 80235 ']' 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
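[Editor's note] For readability, the veth/bridge topology that nvmf_veth_init builds in the trace above can be condensed into the sketch below. The namespace, interface and bridge names and the 10.0.0.x/24 addressing are taken from this run's log; the suite's ipts helper is shown as plain iptables calls with a simplified SPDK_NVMF comment, and the "Cannot find device"/true fallbacks from the cleanup pass are omitted. This is a sketch of what the trace does, not the common.sh implementation itself.

    # Condensed sketch of the topology built by nvmf_veth_init (names/addresses from this run's trace)
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side and target-side veth pairs: one "if" end for traffic, one "br" end for the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # The target ends move into the namespace where nvmf_tgt runs.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators on .1/.2 in the root namespace, targets on .3/.4 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring every end up, then bridge the four "br" ends together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open NVMe/TCP port 4420 on the initiator interfaces and allow forwarding across the bridge.
    # Rules are tagged with an SPDK_NVMF comment so teardown can strip them again (see iptr below).
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

    # Connectivity checks, as in the trace: initiator -> target and target -> initiator.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2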
00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.595 08:21:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.595 [2024-11-17 08:21:07.494980] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:21:02.595 [2024-11-17 08:21:07.495101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.854 [2024-11-17 08:21:07.682742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.854 [2024-11-17 08:21:07.822816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.854 [2024-11-17 08:21:07.822911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.854 [2024-11-17 08:21:07.822931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.854 [2024-11-17 08:21:07.822943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.854 [2024-11-17 08:21:07.822957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.854 [2024-11-17 08:21:07.825666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.854 [2024-11-17 08:21:07.825835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.854 [2024-11-17 08:21:07.826927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.854 [2024-11-17 08:21:07.826989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.113 [2024-11-17 08:21:08.073898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:03.373 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:04.000 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:04.000 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:04.000 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:21:04.000 08:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:04.259 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:04.259 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:21:04.259 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:04.259 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:04.259 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:04.518 [2024-11-17 08:21:09.372847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.518 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.777 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:04.777 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:04.777 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:04.777 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:05.036 08:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:05.296 [2024-11-17 08:21:10.129830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.296 08:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:05.555 08:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:05.555 08:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:05.555 08:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:05.555 08:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:06.933 Initializing NVMe Controllers 00:21:06.933 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:06.933 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:06.933 Initialization complete. Launching workers. 00:21:06.933 ======================================================== 00:21:06.933 Latency(us) 00:21:06.933 Device Information : IOPS MiB/s Average min max 00:21:06.933 PCIE (0000:00:10.0) NSID 1 from core 0: 17790.05 69.49 1799.12 264.91 9429.18 00:21:06.933 ======================================================== 00:21:06.933 Total : 17790.05 69.49 1799.12 264.91 9429.18 00:21:06.933 00:21:06.933 08:21:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:08.312 Initializing NVMe Controllers 00:21:08.312 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.312 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:08.312 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:08.312 Initialization complete. Launching workers. 
00:21:08.312 ======================================================== 00:21:08.312 Latency(us) 00:21:08.312 Device Information : IOPS MiB/s Average min max 00:21:08.312 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3134.88 12.25 318.75 110.92 4277.16 00:21:08.312 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.80 0.49 8076.20 7956.23 12049.50 00:21:08.312 ======================================================== 00:21:08.312 Total : 3259.68 12.73 615.74 110.92 12049.50 00:21:08.312 00:21:08.312 08:21:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:09.691 Initializing NVMe Controllers 00:21:09.691 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:09.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:09.691 Initialization complete. Launching workers. 00:21:09.691 ======================================================== 00:21:09.691 Latency(us) 00:21:09.691 Device Information : IOPS MiB/s Average min max 00:21:09.691 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8578.97 33.51 3730.94 581.28 8032.21 00:21:09.691 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3999.99 15.62 8041.51 6793.82 12521.93 00:21:09.691 ======================================================== 00:21:09.691 Total : 12578.96 49.14 5101.66 581.28 12521.93 00:21:09.691 00:21:09.691 08:21:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:09.691 08:21:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:12.992 Initializing NVMe Controllers 00:21:12.992 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.992 Controller IO queue size 128, less than required. 00:21:12.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.992 Controller IO queue size 128, less than required. 00:21:12.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.992 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:12.992 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:12.992 Initialization complete. Launching workers. 
00:21:12.992 ======================================================== 00:21:12.992 Latency(us) 00:21:12.992 Device Information : IOPS MiB/s Average min max 00:21:12.992 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1470.94 367.73 90554.92 40563.90 312503.38 00:21:12.992 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.95 151.49 224707.31 112152.45 544589.71 00:21:12.992 ======================================================== 00:21:12.992 Total : 2076.89 519.22 129694.79 40563.90 544589.71 00:21:12.992 00:21:12.992 08:21:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:21:12.992 Initializing NVMe Controllers 00:21:12.992 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.992 Controller IO queue size 128, less than required. 00:21:12.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.992 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:12.993 Controller IO queue size 128, less than required. 00:21:12.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.993 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:12.993 WARNING: Some requested NVMe devices were skipped 00:21:12.993 No valid NVMe controllers or AIO or URING devices found 00:21:12.993 08:21:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:21:16.287 Initializing NVMe Controllers 00:21:16.287 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.287 Controller IO queue size 128, less than required. 00:21:16.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:16.287 Controller IO queue size 128, less than required. 00:21:16.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:16.287 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:16.287 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:16.287 Initialization complete. Launching workers. 
00:21:16.287 00:21:16.287 ==================== 00:21:16.287 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:16.287 TCP transport: 00:21:16.287 polls: 5121 00:21:16.287 idle_polls: 2489 00:21:16.287 sock_completions: 2632 00:21:16.287 nvme_completions: 4963 00:21:16.287 submitted_requests: 7440 00:21:16.287 queued_requests: 1 00:21:16.287 00:21:16.287 ==================== 00:21:16.287 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:16.287 TCP transport: 00:21:16.287 polls: 6348 00:21:16.287 idle_polls: 3680 00:21:16.287 sock_completions: 2668 00:21:16.287 nvme_completions: 4995 00:21:16.287 submitted_requests: 7434 00:21:16.287 queued_requests: 1 00:21:16.287 ======================================================== 00:21:16.287 Latency(us) 00:21:16.287 Device Information : IOPS MiB/s Average min max 00:21:16.287 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1240.49 310.12 112775.53 54811.57 472993.26 00:21:16.287 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1248.49 312.12 103223.77 54409.25 334512.72 00:21:16.287 ======================================================== 00:21:16.287 Total : 2488.99 622.25 107984.30 54409.25 472993.26 00:21:16.287 00:21:16.287 08:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:16.287 08:21:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.287 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:16.287 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:21:16.287 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4c79eff6-1f3a-4220-80e5-073542aad70d 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4c79eff6-1f3a-4220-80e5-073542aad70d 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=4c79eff6-1f3a-4220-80e5-073542aad70d 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:16.547 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:16.547 { 00:21:16.547 "uuid": "4c79eff6-1f3a-4220-80e5-073542aad70d", 00:21:16.547 "name": "lvs_0", 00:21:16.547 "base_bdev": "Nvme0n1", 00:21:16.547 "total_data_clusters": 1278, 00:21:16.547 "free_clusters": 1278, 00:21:16.547 "block_size": 4096, 00:21:16.547 "cluster_size": 4194304 00:21:16.547 } 00:21:16.547 ]' 00:21:16.548 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="4c79eff6-1f3a-4220-80e5-073542aad70d") .free_clusters' 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="4c79eff6-1f3a-4220-80e5-073542aad70d") .cluster_size' 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:21:16.820 5112 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4c79eff6-1f3a-4220-80e5-073542aad70d lbd_0 5112 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9707ac09-997b-4e12-abe1-a4c9c49713fc 00:21:16.820 08:21:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9707ac09-997b-4e12-abe1-a4c9c49713fc lvs_n_0 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=4d46d66d-383b-468d-b777-5635eccacaa4 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 4d46d66d-383b-468d-b777-5635eccacaa4 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=4d46d66d-383b-468d-b777-5635eccacaa4 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:17.393 { 00:21:17.393 "uuid": "4c79eff6-1f3a-4220-80e5-073542aad70d", 00:21:17.393 "name": "lvs_0", 00:21:17.393 "base_bdev": "Nvme0n1", 00:21:17.393 "total_data_clusters": 1278, 00:21:17.393 "free_clusters": 0, 00:21:17.393 "block_size": 4096, 00:21:17.393 "cluster_size": 4194304 00:21:17.393 }, 00:21:17.393 { 00:21:17.393 "uuid": "4d46d66d-383b-468d-b777-5635eccacaa4", 00:21:17.393 "name": "lvs_n_0", 00:21:17.393 "base_bdev": "9707ac09-997b-4e12-abe1-a4c9c49713fc", 00:21:17.393 "total_data_clusters": 1276, 00:21:17.393 "free_clusters": 1276, 00:21:17.393 "block_size": 4096, 00:21:17.393 "cluster_size": 4194304 00:21:17.393 } 00:21:17.393 ]' 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="4d46d66d-383b-468d-b777-5635eccacaa4") .free_clusters' 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:21:17.393 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="4d46d66d-383b-468d-b777-5635eccacaa4") .cluster_size' 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:21:17.652 5104 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4d46d66d-383b-468d-b777-5635eccacaa4 lbd_nest_0 5104 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b1b39fc4-422d-4a3b-9062-b87989573099 00:21:17.652 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.912 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:17.912 08:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b1b39fc4-422d-4a3b-9062-b87989573099 00:21:18.171 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:18.430 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:18.430 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:18.430 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:18.430 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:18.430 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:18.690 Initializing NVMe Controllers 00:21:18.690 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.690 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:18.690 WARNING: Some requested NVMe devices were skipped 00:21:18.690 No valid NVMe controllers or AIO or URING devices found 00:21:18.690 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:18.690 08:21:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:30.903 Initializing NVMe Controllers 00:21:30.903 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.903 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:30.903 Initialization complete. Launching workers. 
00:21:30.903 ======================================================== 00:21:30.903 Latency(us) 00:21:30.903 Device Information : IOPS MiB/s Average min max 00:21:30.903 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 814.24 101.78 1227.13 389.30 8191.28 00:21:30.903 ======================================================== 00:21:30.903 Total : 814.24 101.78 1227.13 389.30 8191.28 00:21:30.903 00:21:30.903 08:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:30.903 08:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:30.903 08:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:30.903 Initializing NVMe Controllers 00:21:30.903 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.903 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:30.903 WARNING: Some requested NVMe devices were skipped 00:21:30.903 No valid NVMe controllers or AIO or URING devices found 00:21:30.903 08:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:30.903 08:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:40.886 Initializing NVMe Controllers 00:21:40.886 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.886 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:40.886 Initialization complete. Launching workers. 
00:21:40.886 ======================================================== 00:21:40.886 Latency(us) 00:21:40.886 Device Information : IOPS MiB/s Average min max 00:21:40.886 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1343.70 167.96 23837.49 5137.02 64002.67 00:21:40.886 ======================================================== 00:21:40.886 Total : 1343.70 167.96 23837.49 5137.02 64002.67 00:21:40.886 00:21:40.886 08:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:40.886 08:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:40.886 08:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:40.886 Initializing NVMe Controllers 00:21:40.886 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.886 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:40.886 WARNING: Some requested NVMe devices were skipped 00:21:40.886 No valid NVMe controllers or AIO or URING devices found 00:21:40.886 08:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:40.886 08:21:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:50.883 Initializing NVMe Controllers 00:21:50.883 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.883 Controller IO queue size 128, less than required. 00:21:50.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:50.883 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:50.883 Initialization complete. Launching workers. 
00:21:50.883 ======================================================== 00:21:50.883 Latency(us) 00:21:50.883 Device Information : IOPS MiB/s Average min max 00:21:50.883 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3626.28 453.28 35339.33 13701.04 100030.09 00:21:50.883 ======================================================== 00:21:50.883 Total : 3626.28 453.28 35339.33 13701.04 100030.09 00:21:50.883 00:21:50.883 08:21:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.142 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b1b39fc4-422d-4a3b-9062-b87989573099 00:21:51.401 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:51.660 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9707ac09-997b-4e12-abe1-a4c9c49713fc 00:21:51.919 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:52.178 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:52.178 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:52.178 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.178 08:21:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.178 rmmod nvme_tcp 00:21:52.178 rmmod nvme_fabrics 00:21:52.178 rmmod nvme_keyring 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 80235 ']' 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 80235 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 80235 ']' 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 80235 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80235 00:21:52.178 killing process with pid 80235 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80235' 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 80235 00:21:52.178 08:21:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 80235 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:54.717 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:54.977 ************************************ 00:21:54.977 END TEST nvmf_perf 00:21:54.977 ************************************ 00:21:54.977 00:21:54.977 real 0m53.266s 00:21:54.977 user 3m14.630s 00:21:54.977 sys 0m14.227s 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.977 ************************************ 00:21:54.977 START TEST nvmf_fio_host 00:21:54.977 ************************************ 00:21:54.977 08:21:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:55.238 * Looking for test storage... 00:21:55.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:55.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.238 --rc genhtml_branch_coverage=1 00:21:55.238 --rc genhtml_function_coverage=1 00:21:55.238 --rc genhtml_legend=1 00:21:55.238 --rc geninfo_all_blocks=1 00:21:55.238 --rc geninfo_unexecuted_blocks=1 00:21:55.238 00:21:55.238 ' 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:55.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.238 --rc genhtml_branch_coverage=1 00:21:55.238 --rc genhtml_function_coverage=1 00:21:55.238 --rc genhtml_legend=1 00:21:55.238 --rc geninfo_all_blocks=1 00:21:55.238 --rc geninfo_unexecuted_blocks=1 00:21:55.238 00:21:55.238 ' 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:55.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.238 --rc genhtml_branch_coverage=1 00:21:55.238 --rc genhtml_function_coverage=1 00:21:55.238 --rc genhtml_legend=1 00:21:55.238 --rc geninfo_all_blocks=1 00:21:55.238 --rc geninfo_unexecuted_blocks=1 00:21:55.238 00:21:55.238 ' 00:21:55.238 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:55.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.239 --rc genhtml_branch_coverage=1 00:21:55.239 --rc genhtml_function_coverage=1 00:21:55.239 --rc genhtml_legend=1 00:21:55.239 --rc geninfo_all_blocks=1 00:21:55.239 --rc geninfo_unexecuted_blocks=1 00:21:55.239 00:21:55.239 ' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.239 08:22:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
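Because NET_TYPE=virt, nvmftestinit rebuilds the test network from scratch before the target starts: the teardown commands that follow are expected to fail on a clean host (hence the "Cannot find device" messages, each followed by true), after which a target network namespace, four veth pairs, a bridge, and iptables accept rules are created. A condensed sketch of that topology, assembled from the commands in the trace below (device names and addresses as logged; link-up steps abbreviated, and the ipts wrapper that tags rules with an SPDK_NVMF comment shown as plain iptables):

  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side veth pairs
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                # bridge ties the host-side peers together
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP from the initiator interfaces
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let bridged traffic through

Connectivity is then verified with one ping per direction (host to 10.0.0.3 and 10.0.0.4, namespace to 10.0.0.1 and 10.0.0.2) before the target process is launched.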
00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:55.239 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:55.240 Cannot find device "nvmf_init_br" 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:55.240 Cannot find device "nvmf_init_br2" 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:55.240 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:55.538 Cannot find device "nvmf_tgt_br" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:55.538 Cannot find device "nvmf_tgt_br2" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:55.538 Cannot find device "nvmf_init_br" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:55.538 Cannot find device "nvmf_init_br2" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:55.538 Cannot find device "nvmf_tgt_br" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:55.538 Cannot find device "nvmf_tgt_br2" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:55.538 Cannot find device "nvmf_br" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:55.538 Cannot find device "nvmf_init_if" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:55.538 Cannot find device "nvmf_init_if2" 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:55.538 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:55.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:55.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:55.539 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:55.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:55.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:21:55.844 00:21:55.844 --- 10.0.0.3 ping statistics --- 00:21:55.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.844 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:55.844 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:55.844 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:21:55.844 00:21:55.844 --- 10.0.0.4 ping statistics --- 00:21:55.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.844 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:55.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:21:55.844 00:21:55.844 --- 10.0.0.1 ping statistics --- 00:21:55.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.844 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:55.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:55.844 00:21:55.844 --- 10.0.0.2 ping statistics --- 00:21:55.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.844 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=81145 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 81145 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 81145 ']' 00:21:55.844 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.845 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.845 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.845 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.845 08:22:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.104 [2024-11-17 08:22:00.841517] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:21:56.104 [2024-11-17 08:22:00.841640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.104 [2024-11-17 08:22:01.031892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.363 [2024-11-17 08:22:01.171048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.363 [2024-11-17 08:22:01.171537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.363 [2024-11-17 08:22:01.171657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.363 [2024-11-17 08:22:01.171736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.363 [2024-11-17 08:22:01.171755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
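The notices above and below come from nvmf_tgt (pid 81145) starting up inside the namespace via ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF; once waitforlisten sees /var/tmp/spdk.sock, the rest of the setup is driven over JSON-RPC. The provisioning sequence below is condensed from the rpc.py calls in the trace that follows (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; arguments as logged):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                        # TCP transport; -u 8192 sets the in-capsule data size
  rpc.py bdev_malloc_create 64 512 -b Malloc1                           # 64 MiB RAM-backed bdev with 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # expose the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The host side then exercises the subsystem with fio through the SPDK nvme ioengine: the fio_nvme/fio_plugin helpers locate libasan with ldd and preload it together with build/fio/spdk_nvme so the ASan-instrumented plugin resolves cleanly, and the target is addressed directly in the fio filename as '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1', as seen in the invocations below.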
00:21:56.364 [2024-11-17 08:22:01.174223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.364 [2024-11-17 08:22:01.174544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.364 [2024-11-17 08:22:01.175567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.364 [2024-11-17 08:22:01.175605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.623 [2024-11-17 08:22:01.420835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:56.882 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.882 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:56.882 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:56.882 [2024-11-17 08:22:01.829406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.882 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:56.882 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.882 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.141 08:22:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:57.400 Malloc1 00:21:57.400 08:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:57.400 08:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:57.659 08:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:57.919 [2024-11-17 08:22:02.785096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:57.919 08:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:58.179 08:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:58.438 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:58.438 fio-3.35 00:21:58.438 Starting 1 thread 00:22:00.977 00:22:00.977 test: (groupid=0, jobs=1): err= 0: pid=81214: Sun Nov 17 08:22:05 2024 00:22:00.977 read: IOPS=8971, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:22:00.977 slat (nsec): min=1675, max=469622, avg=1930.55, stdev=4024.81 00:22:00.977 clat (usec): min=2815, max=13562, avg=7434.07, stdev=550.84 00:22:00.977 lat (usec): min=2873, max=13564, avg=7436.00, stdev=550.59 00:22:00.977 clat percentiles (usec): 00:22:00.977 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:22:00.977 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:22:00.977 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:22:00.977 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[11863], 99.95th=[12780], 00:22:00.977 | 99.99th=[13566] 00:22:00.977 bw ( KiB/s): min=34746, max=36344, per=99.94%, avg=35862.50, stdev=749.94, samples=4 00:22:00.977 iops : min= 8686, max= 9086, avg=8965.50, stdev=187.73, samples=4 00:22:00.977 write: IOPS=8992, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec); 0 zone resets 00:22:00.977 slat (nsec): min=1734, max=210702, avg=1982.55, stdev=1868.09 00:22:00.977 clat (usec): min=2621, max=13475, avg=6755.78, stdev=514.04 00:22:00.977 lat (usec): min=2641, max=13477, avg=6757.76, stdev=513.95 00:22:00.977 clat percentiles (usec): 00:22:00.977 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6390], 00:22:00.977 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:22:00.977 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:22:00.977 | 99.00th=[ 7963], 99.50th=[ 8848], 99.90th=[11863], 99.95th=[12780], 00:22:00.977 | 99.99th=[13435] 00:22:00.977 bw ( KiB/s): min=35520, max=36352, per=99.96%, avg=35952.00, stdev=462.62, samples=4 00:22:00.977 iops : min= 8880, max= 9088, avg=8988.00, 
stdev=115.65, samples=4 00:22:00.977 lat (msec) : 4=0.11%, 10=99.64%, 20=0.25% 00:22:00.977 cpu : usr=67.25%, sys=26.47%, ctx=37, majf=0, minf=1556 00:22:00.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:00.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.977 issued rwts: total=18005,18047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.977 00:22:00.977 Run status group 0 (all jobs): 00:22:00.977 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:22:00.977 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:22:00.977 ----------------------------------------------------- 00:22:00.977 Suppressions used: 00:22:00.977 count bytes template 00:22:00.977 1 57 /usr/src/fio/parse.c 00:22:00.977 1 8 libtcmalloc_minimal.so 00:22:00.977 ----------------------------------------------------- 00:22:00.977 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:00.977 08:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:01.237 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:01.237 fio-3.35 00:22:01.237 Starting 1 thread 00:22:03.773 00:22:03.773 test: (groupid=0, jobs=1): err= 0: pid=81256: Sun Nov 17 08:22:08 2024 00:22:03.773 read: IOPS=7949, BW=124MiB/s (130MB/s)(249MiB/2006msec) 00:22:03.773 slat (nsec): min=2648, max=98079, avg=2970.45, stdev=1404.03 00:22:03.773 clat (usec): min=1555, max=18209, avg=9325.58, stdev=2331.62 00:22:03.773 lat (usec): min=1558, max=18212, avg=9328.55, stdev=2331.64 00:22:03.773 clat percentiles (usec): 00:22:03.773 | 1.00th=[ 4359], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7439], 00:22:03.773 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:22:03.773 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12387], 95.00th=[13042], 00:22:03.773 | 99.00th=[16188], 99.50th=[16450], 99.90th=[17171], 99.95th=[17695], 00:22:03.773 | 99.99th=[18220] 00:22:03.773 bw ( KiB/s): min=57600, max=73312, per=51.50%, avg=65504.75, stdev=7002.13, samples=4 00:22:03.773 iops : min= 3600, max= 4582, avg=4094.00, stdev=437.66, samples=4 00:22:03.773 write: IOPS=4710, BW=73.6MiB/s (77.2MB/s)(134MiB/1823msec); 0 zone resets 00:22:03.773 slat (usec): min=28, max=328, avg=30.01, stdev= 5.66 00:22:03.773 clat (usec): min=5590, max=20290, avg=11635.89, stdev=2128.91 00:22:03.773 lat (usec): min=5619, max=20319, avg=11665.91, stdev=2128.72 00:22:03.773 clat percentiles (usec): 00:22:03.773 | 1.00th=[ 7373], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9896], 00:22:03.773 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994], 00:22:03.773 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14746], 95.00th=[15270], 00:22:03.773 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20055], 99.95th=[20055], 00:22:03.773 | 99.99th=[20317] 00:22:03.773 bw ( KiB/s): min=60640, max=75680, per=90.75%, avg=68399.50, stdev=6960.51, samples=4 00:22:03.773 iops : min= 3790, max= 4730, avg=4274.75, stdev=435.19, samples=4 00:22:03.773 lat (msec) : 2=0.04%, 4=0.28%, 10=49.39%, 20=50.24%, 50=0.04% 00:22:03.773 cpu : usr=79.15%, sys=17.66%, ctx=17, majf=0, minf=2275 00:22:03.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:03.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:03.773 issued rwts: total=15946,8588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:03.773 00:22:03.773 Run status group 0 (all jobs): 00:22:03.773 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2006-2006msec 00:22:03.773 WRITE: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=134MiB (141MB), run=1823-1823msec 00:22:03.773 ----------------------------------------------------- 00:22:03.773 Suppressions used: 00:22:03.773 count bytes template 00:22:03.773 1 57 /usr/src/fio/parse.c 00:22:03.773 878 84288 /usr/src/fio/iolog.c 00:22:03.773 1 8 libtcmalloc_minimal.so 00:22:03.773 ----------------------------------------------------- 00:22:03.773 00:22:03.773 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.032 08:22:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:04.032 08:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:22:04.292 Nvme0n1 00:22:04.292 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:22:04.551 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:04.809 { 00:22:04.809 "uuid": "2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8", 00:22:04.809 "name": "lvs_0", 00:22:04.809 "base_bdev": "Nvme0n1", 00:22:04.809 "total_data_clusters": 4, 00:22:04.809 "free_clusters": 4, 00:22:04.809 "block_size": 4096, 00:22:04.809 "cluster_size": 1073741824 00:22:04.809 } 00:22:04.809 ]' 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8") .free_clusters' 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8") .cluster_size' 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:22:04.809 4096 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:22:04.809 08:22:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:22:04.809 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:22:05.068 7cc2ce8e-1027-4408-9f26-335c23f87b39 00:22:05.068 08:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:22:05.328 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:22:05.328 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:05.587 08:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:05.845 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:05.845 fio-3.35 
00:22:05.845 Starting 1 thread 00:22:08.381 00:22:08.381 test: (groupid=0, jobs=1): err= 0: pid=81359: Sun Nov 17 08:22:13 2024 00:22:08.381 read: IOPS=6025, BW=23.5MiB/s (24.7MB/s)(47.3MiB/2010msec) 00:22:08.381 slat (nsec): min=1726, max=466723, avg=2032.03, stdev=5361.63 00:22:08.381 clat (usec): min=3755, max=19263, avg=11135.73, stdev=914.93 00:22:08.381 lat (usec): min=3767, max=19265, avg=11137.76, stdev=914.46 00:22:08.381 clat percentiles (usec): 00:22:08.381 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:22:08.381 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:22:08.381 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:22:08.381 | 99.00th=[13173], 99.50th=[13698], 99.90th=[16909], 99.95th=[18220], 00:22:08.381 | 99.99th=[19268] 00:22:08.381 bw ( KiB/s): min=23304, max=24728, per=99.96%, avg=24092.00, stdev=615.72, samples=4 00:22:08.381 iops : min= 5826, max= 6182, avg=6023.00, stdev=153.93, samples=4 00:22:08.381 write: IOPS=6006, BW=23.5MiB/s (24.6MB/s)(47.2MiB/2010msec); 0 zone resets 00:22:08.381 slat (nsec): min=1773, max=352613, avg=2085.28, stdev=3495.65 00:22:08.381 clat (usec): min=3257, max=18303, avg=10037.09, stdev=893.15 00:22:08.381 lat (usec): min=3274, max=18305, avg=10039.17, stdev=892.86 00:22:08.381 clat percentiles (usec): 00:22:08.381 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:22:08.381 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:22:08.381 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:22:08.381 | 99.00th=[11994], 99.50th=[12387], 99.90th=[17957], 99.95th=[17957], 00:22:08.381 | 99.99th=[18220] 00:22:08.381 bw ( KiB/s): min=23616, max=24408, per=99.98%, avg=24022.00, stdev=371.66, samples=4 00:22:08.381 iops : min= 5904, max= 6102, avg=6005.50, stdev=92.91, samples=4 00:22:08.381 lat (msec) : 4=0.03%, 10=28.58%, 20=71.39% 00:22:08.381 cpu : usr=71.48%, sys=24.29%, ctx=27, majf=0, minf=1557 00:22:08.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:08.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.381 issued rwts: total=12111,12073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.381 00:22:08.381 Run status group 0 (all jobs): 00:22:08.381 READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2010-2010msec 00:22:08.381 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.2MiB (49.5MB), run=2010-2010msec 00:22:08.382 ----------------------------------------------------- 00:22:08.382 Suppressions used: 00:22:08.382 count bytes template 00:22:08.382 1 58 /usr/src/fio/parse.c 00:22:08.382 1 8 libtcmalloc_minimal.so 00:22:08.382 ----------------------------------------------------- 00:22:08.382 00:22:08.382 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:08.641 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:22:08.900 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f3813208-d64e-4e2c-b0ff-373517c5e096 00:22:08.900 08:22:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb f3813208-d64e-4e2c-b0ff-373517c5e096 00:22:08.900 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f3813208-d64e-4e2c-b0ff-373517c5e096 00:22:08.900 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:08.900 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:22:08.900 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:22:08.900 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:09.160 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:09.160 { 00:22:09.160 "uuid": "2b7cd1a0-ca22-4fb9-af1b-ba5977223dd8", 00:22:09.160 "name": "lvs_0", 00:22:09.160 "base_bdev": "Nvme0n1", 00:22:09.160 "total_data_clusters": 4, 00:22:09.160 "free_clusters": 0, 00:22:09.160 "block_size": 4096, 00:22:09.160 "cluster_size": 1073741824 00:22:09.160 }, 00:22:09.160 { 00:22:09.160 "uuid": "f3813208-d64e-4e2c-b0ff-373517c5e096", 00:22:09.160 "name": "lvs_n_0", 00:22:09.160 "base_bdev": "7cc2ce8e-1027-4408-9f26-335c23f87b39", 00:22:09.160 "total_data_clusters": 1022, 00:22:09.160 "free_clusters": 1022, 00:22:09.160 "block_size": 4096, 00:22:09.160 "cluster_size": 4194304 00:22:09.160 } 00:22:09.160 ]' 00:22:09.160 08:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f3813208-d64e-4e2c-b0ff-373517c5e096") .free_clusters' 00:22:09.160 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:22:09.160 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f3813208-d64e-4e2c-b0ff-373517c5e096") .cluster_size' 00:22:09.160 4088 00:22:09.160 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:22:09.160 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:22:09.160 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:22:09.160 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:22:09.419 9d03987e-4b9f-401f-9db4-30650138ae18 00:22:09.419 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:22:09.419 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:22:09.678 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp 
adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:09.938 08:22:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:10.197 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:10.197 fio-3.35 00:22:10.197 Starting 1 thread 00:22:12.733 00:22:12.733 test: (groupid=0, jobs=1): err= 0: pid=81430: Sun Nov 17 08:22:17 2024 00:22:12.733 read: IOPS=6172, BW=24.1MiB/s (25.3MB/s)(48.4MiB/2009msec) 00:22:12.733 slat (nsec): min=1697, max=270602, avg=1950.13, stdev=3245.93 00:22:12.733 clat (usec): min=3211, max=19296, avg=10873.18, stdev=922.58 00:22:12.733 lat (usec): min=3219, max=19298, avg=10875.13, stdev=922.35 00:22:12.733 clat percentiles (usec): 00:22:12.733 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:22:12.733 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:22:12.733 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:22:12.733 | 99.00th=[13173], 99.50th=[13566], 99.90th=[17957], 99.95th=[18220], 00:22:12.733 | 99.99th=[19268] 00:22:12.733 bw ( KiB/s): min=23584, max=25144, per=99.89%, avg=24662.00, stdev=725.32, samples=4 00:22:12.733 iops : min= 5896, max= 6286, avg=6165.50, stdev=181.33, samples=4 00:22:12.733 write: IOPS=6156, BW=24.0MiB/s (25.2MB/s)(48.3MiB/2009msec); 0 zone resets 00:22:12.733 slat (nsec): min=1753, max=195726, avg=1997.91, stdev=2001.27 00:22:12.733 clat (usec): min=2136, max=19421, avg=9793.79, stdev=851.77 00:22:12.733 lat (usec): min=2149, max=19423, avg=9795.79, stdev=851.68 00:22:12.733 clat percentiles 
(usec): 00:22:12.733 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:22:12.733 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:22:12.733 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:22:12.733 | 99.00th=[11731], 99.50th=[12125], 99.90th=[16188], 99.95th=[17433], 00:22:12.733 | 99.99th=[19268] 00:22:12.733 bw ( KiB/s): min=24576, max=24688, per=99.98%, avg=24622.00, stdev=55.57, samples=4 00:22:12.733 iops : min= 6144, max= 6172, avg=6155.50, stdev=13.89, samples=4 00:22:12.733 lat (msec) : 4=0.05%, 10=37.50%, 20=62.44% 00:22:12.733 cpu : usr=69.82%, sys=25.85%, ctx=7, majf=0, minf=1556 00:22:12.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:12.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:12.733 issued rwts: total=12400,12369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:12.733 00:22:12.733 Run status group 0 (all jobs): 00:22:12.733 READ: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.4MiB (50.8MB), run=2009-2009msec 00:22:12.733 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2009-2009msec 00:22:12.733 ----------------------------------------------------- 00:22:12.733 Suppressions used: 00:22:12.733 count bytes template 00:22:12.733 1 58 /usr/src/fio/parse.c 00:22:12.733 1 8 libtcmalloc_minimal.so 00:22:12.733 ----------------------------------------------------- 00:22:12.733 00:22:12.733 08:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:12.992 08:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:22:12.992 08:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:13.250 08:22:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:13.508 08:22:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:13.508 08:22:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:13.768 08:22:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.369 rmmod nvme_tcp 00:22:14.369 rmmod nvme_fabrics 00:22:14.369 rmmod nvme_keyring 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 81145 ']' 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 81145 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 81145 ']' 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 81145 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.369 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81145 00:22:14.628 killing process with pid 81145 00:22:14.628 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.628 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.628 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81145' 00:22:14.628 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 81145 00:22:14.628 08:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 81145 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:16.008 08:22:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:16.008 08:22:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:22:16.269 ************************************ 00:22:16.269 END TEST nvmf_fio_host 00:22:16.269 ************************************ 00:22:16.269 00:22:16.269 real 0m21.215s 00:22:16.269 user 1m27.732s 00:22:16.269 sys 0m5.725s 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.269 ************************************ 00:22:16.269 START TEST nvmf_failover 00:22:16.269 ************************************ 00:22:16.269 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:16.530 * Looking for test storage... 
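The START TEST banner just above comes from the autotest run_test helper; a rough sketch of its shape, assuming only what the banners and the real/user/sys timing lines show (the actual helper in autotest_common.sh also validates its argument count, as the '[' 3 -le 1 ']' trace hints, and manages xtrace):
  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time {
          "$@"
          echo "************************************"
          echo "END TEST $test_name"
          echo "************************************"
      }                                # bash time prints the real/user/sys summary after the END banner
  }
  run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp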
00:22:16.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.530 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:16.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.530 --rc genhtml_branch_coverage=1 00:22:16.530 --rc genhtml_function_coverage=1 00:22:16.530 --rc genhtml_legend=1 00:22:16.530 --rc geninfo_all_blocks=1 00:22:16.530 --rc geninfo_unexecuted_blocks=1 00:22:16.530 00:22:16.530 ' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.531 --rc genhtml_branch_coverage=1 00:22:16.531 --rc genhtml_function_coverage=1 00:22:16.531 --rc genhtml_legend=1 00:22:16.531 --rc geninfo_all_blocks=1 00:22:16.531 --rc geninfo_unexecuted_blocks=1 00:22:16.531 00:22:16.531 ' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.531 --rc genhtml_branch_coverage=1 00:22:16.531 --rc genhtml_function_coverage=1 00:22:16.531 --rc genhtml_legend=1 00:22:16.531 --rc geninfo_all_blocks=1 00:22:16.531 --rc geninfo_unexecuted_blocks=1 00:22:16.531 00:22:16.531 ' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.531 --rc genhtml_branch_coverage=1 00:22:16.531 --rc genhtml_function_coverage=1 00:22:16.531 --rc genhtml_legend=1 00:22:16.531 --rc geninfo_all_blocks=1 00:22:16.531 --rc geninfo_unexecuted_blocks=1 00:22:16.531 00:22:16.531 ' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.531 
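A condensed sketch of the host-identity setup traced just above, assuming the UUID is obtained by stripping the prefix from the generated NQN (the log only shows the resulting values, not how common.sh extracts them):
  NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # bare UUID reused as --hostid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later host-side connects would pass these, e.g.:
  # nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"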
08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
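The "[: : integer expression expected" message above is bash complaining about a numeric test on an empty variable, as the traced '[' '' -eq 1 ']' shows; a minimal reproduction and a guarded form (SOME_FLAG is a placeholder name, not the variable common.sh actually tests on line 33):
  SOME_FLAG=""
  [ "$SOME_FLAG" -eq 1 ] && echo yes        # -> "[: : integer expression expected", test fails, no echo
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo yes   # defaulting to 0 keeps the comparison numeric and silent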
00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:16.531 Cannot find device "nvmf_init_br" 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:16.531 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:16.792 Cannot find device "nvmf_init_br2" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:22:16.792 Cannot find device "nvmf_tgt_br" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.792 Cannot find device "nvmf_tgt_br2" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:16.792 Cannot find device "nvmf_init_br" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:16.792 Cannot find device "nvmf_init_br2" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:16.792 Cannot find device "nvmf_tgt_br" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:16.792 Cannot find device "nvmf_tgt_br2" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:16.792 Cannot find device "nvmf_br" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:16.792 Cannot find device "nvmf_init_if" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:16.792 Cannot find device "nvmf_init_if2" 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.792 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:16.792 
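The interface setup traced above and continued below amounts to veth pairs bridged together, with the target end of each pair moved into the nvmf_tgt_ns_spdk namespace; a condensed sketch using the names and addresses shown in the log (the real nvmf_veth_init also builds the second initiator/target pair for 10.0.0.2 and 10.0.0.4):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br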
08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:17.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:17.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:22:17.052 00:22:17.052 --- 10.0.0.3 ping statistics --- 00:22:17.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.052 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:17.052 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:17.052 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.132 ms 00:22:17.052 00:22:17.052 --- 10.0.0.4 ping statistics --- 00:22:17.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.052 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:17.052 08:22:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:17.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:22:17.052 00:22:17.052 --- 10.0.0.1 ping statistics --- 00:22:17.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.052 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:17.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:22:17.052 00:22:17.052 --- 10.0.0.2 ping statistics --- 00:22:17.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.052 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=81740 00:22:17.052 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:17.053 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 81740 00:22:17.053 08:22:22 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81740 ']' 00:22:17.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.053 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.053 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.053 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.053 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.053 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:17.312 [2024-11-17 08:22:22.170120] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:17.312 [2024-11-17 08:22:22.170243] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.572 [2024-11-17 08:22:22.357976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.572 [2024-11-17 08:22:22.467808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.572 [2024-11-17 08:22:22.467863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.572 [2024-11-17 08:22:22.467878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.572 [2024-11-17 08:22:22.467888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.572 [2024-11-17 08:22:22.467902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
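nvmfappstart above launches the target inside the test namespace and then blocks until its RPC socket answers (the remaining startup notices about reactors and the uring socket implementation continue below); a rough sketch of that pattern, noting that waitforlisten's actual polling logic lives in autotest_common.sh and is not reproduced in this log:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the default RPC socket until the target accepts configuration RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done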
00:22:17.572 [2024-11-17 08:22:22.469805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.572 [2024-11-17 08:22:22.469922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.572 [2024-11-17 08:22:22.469887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.831 [2024-11-17 08:22:22.666052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:18.090 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.091 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:18.091 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.091 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.091 08:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:18.091 08:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.091 08:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:18.350 [2024-11-17 08:22:23.203875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.350 08:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:18.610 Malloc0 00:22:18.610 08:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.869 08:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.129 08:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:19.129 [2024-11-17 08:22:24.094111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:19.129 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:19.388 [2024-11-17 08:22:24.302174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:19.388 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:19.647 [2024-11-17 08:22:24.510341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81798 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81798 /var/tmp/bdevperf.sock 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 81798 ']' 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.647 08:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:20.586 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.586 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:20.586 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:20.846 NVMe0n1 00:22:20.846 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:21.105 00:22:21.105 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81817 00:22:21.105 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.105 08:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:22.054 08:22:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:22.329 08:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:25.620 08:22:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:25.620 00:22:25.620 08:22:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:25.880 08:22:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:29.172 08:22:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:29.172 [2024-11-17 08:22:33.988918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:29.172 08:22:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:30.110 08:22:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:30.369 08:22:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81817 00:22:36.940 { 00:22:36.940 "results": [ 00:22:36.940 { 00:22:36.940 "job": "NVMe0n1", 00:22:36.940 "core_mask": "0x1", 00:22:36.940 "workload": "verify", 00:22:36.940 "status": "finished", 00:22:36.940 "verify_range": { 00:22:36.940 "start": 0, 00:22:36.940 "length": 16384 00:22:36.940 }, 00:22:36.940 "queue_depth": 128, 00:22:36.940 "io_size": 4096, 00:22:36.940 "runtime": 15.008099, 00:22:36.941 "iops": 8560.111443827762, 00:22:36.941 "mibps": 33.437935327452195, 00:22:36.941 "io_failed": 3557, 00:22:36.941 "io_timeout": 0, 00:22:36.941 "avg_latency_us": 14524.500933731591, 00:22:36.941 "min_latency_us": 549.4232931726908, 00:22:36.941 "max_latency_us": 16107.643373493976 00:22:36.941 } 00:22:36.941 ], 00:22:36.941 "core_count": 1 00:22:36.941 } 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81798 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81798 ']' 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81798 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81798 00:22:36.941 killing process with pid 81798 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81798' 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81798 00:22:36.941 08:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81798 00:22:37.518 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:37.518 [2024-11-17 08:22:24.631938] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:37.518 [2024-11-17 08:22:24.632079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81798 ] 00:22:37.518 [2024-11-17 08:22:24.817916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.518 [2024-11-17 08:22:24.955285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.518 [2024-11-17 08:22:25.196174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:37.518 Running I/O for 15 seconds... 
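With bdevperf holding failover paths to cnode1 on ports 4420 and 4421, the test then removes and restores listeners to force path switches while the 15-second verify workload runs; a condensed sketch of the RPC sequence traced above (rpc and brpc are shorthand wrappers introduced here, not helpers from the scripts), whose path teardown appears below as the bursts of ABORTED - SQ DELETION completions on the affected queue pairs:
  rpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                              # target-side RPCs
  brpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }    # bdevperf-side RPCs
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop the first path
  sleep 3                                                                                    # I/O fails over to 4421
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # drop the second path
  sleep 3
  rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # restore 4420
  sleep 1
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422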
00:22:37.518 9488.00 IOPS, 37.06 MiB/s [2024-11-17T08:22:42.530Z] [2024-11-17 08:22:27.196046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:37.518 [2024-11-17 08:22:27.196538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.518 [2024-11-17 08:22:27.196911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.518 [2024-11-17 08:22:27.196931] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.519-00:22:37.522 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-11-17 08:22:27.196947 - 08:22:27.201072): READ/WRITE commands on sqid:1, lba 84480-85488, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:37.522 [2024-11-17 08:22:27.201093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set
00:22:37.522 [2024-11-17 08:22:27.201115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:37.522 [2024-11-17 08:22:27.201134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:37.522 [2024-11-17 08:22:27.201148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85496 len:8 PRP1 0x0 PRP2 0x0
00:22:37.522 [2024-11-17 08:22:27.201175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.522 [2024-11-17 08:22:27.201546] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:22:37.522 [2024-11-17 08:22:27.201618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.522 [2024-11-17 08:22:27.201645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.522 [2024-11-17 08:22:27.201665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.522 [2024-11-17 08:22:27.201682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.522 [2024-11-17 08:22:27.201699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.522 [2024-11-17 08:22:27.201715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.522 [2024-11-17 08:22:27.201733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.522 [2024-11-17 08:22:27.201749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.522 [2024-11-17 08:22:27.201774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:37.522 [2024-11-17 08:22:27.201844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:22:37.522 [2024-11-17 08:22:27.204827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:37.522 [2024-11-17 08:22:27.227260] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:37.522 9362.00 IOPS, 36.57 MiB/s [2024-11-17T08:22:42.534Z] 9423.00 IOPS, 36.81 MiB/s [2024-11-17T08:22:42.534Z] 9437.75 IOPS, 36.87 MiB/s [2024-11-17T08:22:42.534Z]
00:22:37.522-00:22:37.525 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-11-17 08:22:30.741538 - 08:22:30.745024): READ/WRITE commands on sqid:1, lba 98344-99256, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
[2024-11-17 08:22:30.745053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.525 [2024-11-17 08:22:30.745368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:22:37.525 [2024-11-17 08:22:30.745407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.525 [2024-11-17 
08:22:30.745421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.525 [2024-11-17 08:22:30.745436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98656 len:8 PRP1 0x0 PRP2 0x0 00:22:37.525 [2024-11-17 08:22:30.745462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.525 [2024-11-17 08:22:30.745503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.525 [2024-11-17 08:22:30.745516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99264 len:8 PRP1 0x0 PRP2 0x0 00:22:37.525 [2024-11-17 08:22:30.745533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.525 [2024-11-17 08:22:30.745549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99272 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99280 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99296 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745794] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.745948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.745961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.745974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.745990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98664 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98672 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98680 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 08:22:30.746435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98688 len:8 PRP1 0x0 PRP2 0x0 00:22:37.526 [2024-11-17 08:22:30.746461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.526 [2024-11-17 08:22:30.746477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.526 [2024-11-17 08:22:30.746489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.526 [2024-11-17 
08:22:30.746502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98696 len:8 PRP1 0x0 PRP2 0x0
00:22:37.526 [2024-11-17 08:22:30.746518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.526 [2024-11-17 08:22:30.746533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:37.526 [2024-11-17 08:22:30.746550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:37.526 [2024-11-17 08:22:30.746563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98704 len:8 PRP1 0x0 PRP2 0x0
00:22:37.526 [2024-11-17 08:22:30.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.526 [2024-11-17 08:22:30.746596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:37.526 [2024-11-17 08:22:30.746608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:37.526 [2024-11-17 08:22:30.746627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98712 len:8 PRP1 0x0 PRP2 0x0
00:22:37.526 [2024-11-17 08:22:30.746643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.526 [2024-11-17 08:22:30.746659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:37.526 [2024-11-17 08:22:30.746671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:37.526 [2024-11-17 08:22:30.746684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98720 len:8 PRP1 0x0 PRP2 0x0
00:22:37.526 [2024-11-17 08:22:30.746700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.526 [2024-11-17 08:22:30.747006] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:22:37.526 [2024-11-17 08:22:30.747074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.526 [2024-11-17 08:22:30.747095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:30.747115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.527 [2024-11-17 08:22:30.747131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:30.747149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.527 [2024-11-17 08:22:30.747165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:30.747182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:37.527 [2024-11-17 08:22:30.747198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:30.747216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:22:37.527 [2024-11-17 08:22:30.747275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:22:37.527 [2024-11-17 08:22:30.750243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:37.527 [2024-11-17 08:22:30.773025] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:22:37.527 9357.80 IOPS, 36.55 MiB/s [2024-11-17T08:22:42.539Z] 9356.83 IOPS, 36.55 MiB/s [2024-11-17T08:22:42.539Z] 9368.43 IOPS, 36.60 MiB/s [2024-11-17T08:22:42.539Z] 9390.25 IOPS, 36.68 MiB/s [2024-11-17T08:22:42.539Z] 9408.56 IOPS, 36.75 MiB/s [2024-11-17T08:22:42.539Z]
[2024-11-17 08:22:35.219175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:37.527 [2024-11-17 08:22:35.219546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:37.527 [2024-11-17 08:22:35.219564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.527 [2024-11-17 08:22:35.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.219829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.219861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.219902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 
08:22:35.219938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.219971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.219989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.527 [2024-11-17 08:22:35.220357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.527 [2024-11-17 08:22:35.220374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.220639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.528 [2024-11-17 08:22:35.220980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.220998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.528 [2024-11-17 08:22:35.221276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.528 [2024-11-17 08:22:35.221292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 
[2024-11-17 08:22:35.221344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.529 [2024-11-17 08:22:35.221848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.221971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.221987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.529 [2024-11-17 08:22:35.222412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.529 [2024-11-17 08:22:35.222431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51480 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.222706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:37.530 [2024-11-17 08:22:35.222814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.222970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.530 [2024-11-17 08:22:35.222986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223161] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.530 [2024-11-17 08:22:35.223522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.530 [2024-11-17 08:22:35.223539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:22:37.530 [2024-11-17 08:22:35.223561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.530 [2024-11-17 08:22:35.223576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.530 [2024-11-17 08:22:35.223591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51664 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.223627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.223640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.223653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51992 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.223684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.223708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.223720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52000 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.223751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.223762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.223775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52008 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.223821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.223833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.223848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52016 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.223882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.223894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.223907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:52024 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.223957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.223969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.223982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52032 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.223998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.224022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.224034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.224047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52040 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.224064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.224079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.531 [2024-11-17 08:22:35.224090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.531 [2024-11-17 08:22:35.224103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52048 len:8 PRP1 0x0 PRP2 0x0 00:22:37.531 [2024-11-17 08:22:35.224119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.224408] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:22:37.531 [2024-11-17 08:22:35.224507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.531 [2024-11-17 08:22:35.224530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.224549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.531 [2024-11-17 08:22:35.224566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.224583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.531 [2024-11-17 08:22:35.224600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.531 [2024-11-17 08:22:35.224618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.531 [2024-11-17 08:22:35.224634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
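[editor's note] The wall of "ABORTED - SQ DELETION (00/08)" notices above is expected in this phase: the log shows bdev_nvme starting a failover from 10.0.0.3:4422 to 10.0.0.3:4420, deleting the old submission queue, printing every command that was still queued on it, and then resetting the controller on the new path ("Resetting controller successful"). When reading the per-run output file offline, the burst can be summarized with two greps (a reading aid only, not part of the test script; the path is the try.txt file cat'ed later in this log):

  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt        # commands aborted in flight
  grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt  # completed failover resets

The second count is the same check host/failover.sh performs next with grep -c; the run fails unless exactly 3 successful resets are seen.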
00:22:37.531 [2024-11-17 08:22:35.224650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:37.531 [2024-11-17 08:22:35.224716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:37.531 [2024-11-17 08:22:35.227719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:37.531 [2024-11-17 08:22:35.251729] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:37.531 9167.40 IOPS, 35.81 MiB/s [2024-11-17T08:22:42.543Z] 8971.27 IOPS, 35.04 MiB/s [2024-11-17T08:22:42.543Z] 8807.25 IOPS, 34.40 MiB/s [2024-11-17T08:22:42.543Z] 8669.00 IOPS, 33.86 MiB/s [2024-11-17T08:22:42.543Z] 8558.86 IOPS, 33.43 MiB/s [2024-11-17T08:22:42.543Z] 8557.60 IOPS, 33.43 MiB/s 00:22:37.531 Latency(us) 00:22:37.531 [2024-11-17T08:22:42.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.531 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:37.531 Verification LBA range: start 0x0 length 0x4000 00:22:37.531 NVMe0n1 : 15.01 8560.11 33.44 237.01 0.00 14524.50 549.42 16107.64 00:22:37.531 [2024-11-17T08:22:42.543Z] =================================================================================================================== 00:22:37.531 [2024-11-17T08:22:42.543Z] Total : 8560.11 33.44 237.01 0.00 14524.50 549.42 16107.64 00:22:37.531 Received shutdown signal, test time was about 15.000000 seconds 00:22:37.531 00:22:37.531 Latency(us) 00:22:37.531 [2024-11-17T08:22:42.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.531 [2024-11-17T08:22:42.543Z] =================================================================================================================== 00:22:37.531 [2024-11-17T08:22:42.543Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=82001 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 82001 /var/tmp/bdevperf.sock 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 82001 ']' 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
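[editor's note] With the three resets verified, the script enters its second phase: it starts another bdevperf in RPC-driven mode (-z, waiting on /var/tmp/bdevperf.sock), and in the entries that follow it re-adds listeners on 10.0.0.3:4421 and 4422, attaches the same subsystem over three portals with -x failover, detaches the 4420 path to force a failover, and finally kicks off the one-second verify workload through bdevperf.py. A condensed, standalone restatement of those commands, under the assumption that SPDK lives at this job's workspace path (adjust SPDK_DIR for another tree):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC=$SPDK_DIR/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # bdevperf in RPC mode: -z starts the app but waits for a perform_tests RPC before running I/O
  $SPDK_DIR/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w verify -t 1 -f &
  # (the test script waits for the RPC socket to come up before issuing commands)

  # expose two additional portals on the target side
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

  # attach the same controller over three paths; -x failover registers the extra
  # transport IDs as alternate (standby) paths rather than active-active multipath
  for port in 4420 4421 4422; do
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s $port \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done

  # drop the active 4420 path, give the bdev time to fail over, then run the verify workload
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

The JSON block and the cat of try.txt that follow are the results of exactly this sequence: the verify job completes at ~6967 IOPS after the bdev fails over from 10.0.0.3:4420 to 10.0.0.3:4421.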
00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.531 08:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:38.471 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.471 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:38.471 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:38.471 [2024-11-17 08:22:43.481953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:38.731 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:38.731 [2024-11-17 08:22:43.674079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:38.731 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:39.000 NVMe0n1 00:22:39.000 08:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:39.259 00:22:39.259 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:39.519 00:22:39.519 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:39.519 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.778 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.037 08:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:43.327 08:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:43.327 08:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:43.327 08:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=82069 00:22:43.327 08:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.327 08:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 82069 00:22:44.264 { 00:22:44.264 "results": [ 00:22:44.264 { 00:22:44.264 "job": "NVMe0n1", 00:22:44.264 "core_mask": "0x1", 00:22:44.264 "workload": "verify", 00:22:44.264 "status": "finished", 00:22:44.264 "verify_range": { 00:22:44.264 "start": 0, 00:22:44.264 "length": 16384 00:22:44.264 }, 00:22:44.264 "queue_depth": 128, 
00:22:44.264 "io_size": 4096, 00:22:44.264 "runtime": 1.005366, 00:22:44.264 "iops": 6966.617132467181, 00:22:44.264 "mibps": 27.213348173699927, 00:22:44.264 "io_failed": 0, 00:22:44.264 "io_timeout": 0, 00:22:44.264 "avg_latency_us": 18315.81143259503, 00:22:44.264 "min_latency_us": 2118.734136546185, 00:22:44.264 "max_latency_us": 16949.87309236948 00:22:44.264 } 00:22:44.264 ], 00:22:44.264 "core_count": 1 00:22:44.264 } 00:22:44.264 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:44.264 [2024-11-17 08:22:42.504450] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:44.264 [2024-11-17 08:22:42.504646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82001 ] 00:22:44.264 [2024-11-17 08:22:42.693674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.264 [2024-11-17 08:22:42.831562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.264 [2024-11-17 08:22:43.080654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:44.264 [2024-11-17 08:22:44.874735] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:44.264 [2024-11-17 08:22:44.874882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.264 [2024-11-17 08:22:44.874917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.264 [2024-11-17 08:22:44.874942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.264 [2024-11-17 08:22:44.874966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.264 [2024-11-17 08:22:44.874984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.264 [2024-11-17 08:22:44.875005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.264 [2024-11-17 08:22:44.875023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.264 [2024-11-17 08:22:44.875043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.264 [2024-11-17 08:22:44.875068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:44.264 [2024-11-17 08:22:44.875151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:44.264 [2024-11-17 08:22:44.875191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:44.265 [2024-11-17 08:22:44.883801] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:22:44.265 Running I/O for 1 seconds... 00:22:44.265 6876.00 IOPS, 26.86 MiB/s 00:22:44.265 Latency(us) 00:22:44.265 [2024-11-17T08:22:49.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.265 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:44.265 Verification LBA range: start 0x0 length 0x4000 00:22:44.265 NVMe0n1 : 1.01 6966.62 27.21 0.00 0.00 18315.81 2118.73 16949.87 00:22:44.265 [2024-11-17T08:22:49.277Z] =================================================================================================================== 00:22:44.265 [2024-11-17T08:22:49.277Z] Total : 6966.62 27.21 0.00 0.00 18315.81 2118.73 16949.87 00:22:44.265 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:44.265 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.523 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.782 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.782 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:45.040 08:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.298 08:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 82001 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 82001 ']' 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 82001 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82001 00:22:48.590 killing process with pid 82001 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82001' 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 82001 00:22:48.590 08:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 82001 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.971 rmmod nvme_tcp 00:22:49.971 rmmod nvme_fabrics 00:22:49.971 rmmod nvme_keyring 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 81740 ']' 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 81740 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81740 ']' 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81740 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81740 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:49.971 killing process with pid 81740 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81740' 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81740 00:22:49.971 08:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81740 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.351 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:51.610 00:22:51.610 real 0m35.326s 00:22:51.610 user 2m10.923s 00:22:51.610 sys 0m7.040s 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.610 ************************************ 00:22:51.610 END TEST nvmf_failover 00:22:51.610 ************************************ 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.610 ************************************ 00:22:51.610 START TEST nvmf_host_discovery 00:22:51.610 ************************************ 00:22:51.610 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:51.870 * Looking for test storage... 
00:22:51.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.870 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.871 --rc genhtml_branch_coverage=1 00:22:51.871 --rc genhtml_function_coverage=1 00:22:51.871 --rc genhtml_legend=1 00:22:51.871 --rc geninfo_all_blocks=1 00:22:51.871 --rc geninfo_unexecuted_blocks=1 00:22:51.871 00:22:51.871 ' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.871 --rc genhtml_branch_coverage=1 00:22:51.871 --rc genhtml_function_coverage=1 00:22:51.871 --rc genhtml_legend=1 00:22:51.871 --rc geninfo_all_blocks=1 00:22:51.871 --rc geninfo_unexecuted_blocks=1 00:22:51.871 00:22:51.871 ' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.871 --rc genhtml_branch_coverage=1 00:22:51.871 --rc genhtml_function_coverage=1 00:22:51.871 --rc genhtml_legend=1 00:22:51.871 --rc geninfo_all_blocks=1 00:22:51.871 --rc geninfo_unexecuted_blocks=1 00:22:51.871 00:22:51.871 ' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:51.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.871 --rc genhtml_branch_coverage=1 00:22:51.871 --rc genhtml_function_coverage=1 00:22:51.871 --rc genhtml_legend=1 00:22:51.871 --rc geninfo_all_blocks=1 00:22:51.871 --rc geninfo_unexecuted_blocks=1 00:22:51.871 00:22:51.871 ' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.871 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.871 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:52.132 Cannot find device "nvmf_init_br" 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:52.132 Cannot find device "nvmf_init_br2" 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:52.132 Cannot find device "nvmf_tgt_br" 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.132 Cannot find device "nvmf_tgt_br2" 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:52.132 Cannot find device "nvmf_init_br" 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:52.132 08:22:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:52.132 Cannot find device "nvmf_init_br2" 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:52.132 Cannot find device "nvmf_tgt_br" 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:52.132 Cannot find device "nvmf_tgt_br2" 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:52.132 Cannot find device "nvmf_br" 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:52.132 Cannot find device "nvmf_init_if" 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:52.132 Cannot find device "nvmf_init_if2" 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:52.132 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:52.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:22:52.393 00:22:52.393 --- 10.0.0.3 ping statistics --- 00:22:52.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.393 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:52.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:52.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:22:52.393 00:22:52.393 --- 10.0.0.4 ping statistics --- 00:22:52.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.393 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:52.393 00:22:52.393 --- 10.0.0.1 ping statistics --- 00:22:52.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.393 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:52.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:52.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:22:52.393 00:22:52.393 --- 10.0.0.2 ping statistics --- 00:22:52.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.393 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.393 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=82418 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 82418 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 82418 ']' 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.653 08:22:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.653 [2024-11-17 08:22:57.514765] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:52.653 [2024-11-17 08:22:57.514882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.912 [2024-11-17 08:22:57.701675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.912 [2024-11-17 08:22:57.805104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.912 [2024-11-17 08:22:57.805156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.912 [2024-11-17 08:22:57.805172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.912 [2024-11-17 08:22:57.805191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.912 [2024-11-17 08:22:57.805203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.912 [2024-11-17 08:22:57.806262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.172 [2024-11-17 08:22:57.991292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.433 [2024-11-17 08:22:58.393621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.433 [2024-11-17 08:22:58.405803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.433 08:22:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.433 null0 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.433 null1 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.433 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82451 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82451 /tmp/host.sock 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 82451 ']' 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.693 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.693 08:22:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.693 [2024-11-17 08:22:58.561965] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:53.693 [2024-11-17 08:22:58.562105] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82451 ] 00:22:53.952 [2024-11-17 08:22:58.747841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.952 [2024-11-17 08:22:58.877484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.211 [2024-11-17 08:22:59.116990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.470 08:22:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:54.470 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.471 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.471 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.471 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:54.471 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:54.471 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.471 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.730 [2024-11-17 08:22:59.704105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:54.730 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:54.731 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.731 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.731 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.731 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:54.731 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:54.731 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:54.991 08:22:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:55.559 [2024-11-17 08:23:00.394360] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:55.559 [2024-11-17 08:23:00.394425] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:55.559 [2024-11-17 08:23:00.394519] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:55.559 [2024-11-17 08:23:00.400420] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:55.559 [2024-11-17 08:23:00.454951] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:55.559 [2024-11-17 08:23:00.456609] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:22:55.559 [2024-11-17 08:23:00.459058] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:55.559 [2024-11-17 08:23:00.459092] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:55.559 [2024-11-17 08:23:00.463301] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.129 08:23:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.129 08:23:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.129 [2024-11-17 08:23:01.086170] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.129 [2024-11-17 08:23:01.092455] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
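The exchanges above all follow one pattern: the test drives the target over the default RPC socket (rpc_cmd nvmf_subsystem_add_ns ...) and then polls the host-side app on /tmp/host.sock until the discovery service has propagated the change. A minimal sketch of that polling pattern, assuming helper names as they appear in the trace (the authoritative definitions live in common/autotest_common.sh and host/discovery.sh, so the exact bodies may differ):

    # Space-separated list of bdevs known to the host app on /tmp/host.sock,
    # as used by host/discovery.sh@55.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Re-evaluate an arbitrary shell condition once per second, at most 10 times,
    # mirroring the local max=10 / eval / sleep 1 sequence from common/autotest_common.sh.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # host/discovery.sh@113: after adding null1 as a second namespace of cnode0,
    # wait until the host sees both namespaces as bdevs.
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'

The same waiting helper is reused throughout this test with get_subsystem_names (bdev_nvme_get_controllers) and with the notification count derived from notify_get_notifications -i $notify_id.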
00:22:56.129 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.418 [2024-11-17 08:23:01.196092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:56.418 [2024-11-17 08:23:01.196270] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:56.418 [2024-11-17 08:23:01.196308] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:56.418 [2024-11-17 08:23:01.202302] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.418 [2024-11-17 08:23:01.267112] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:56.418 [2024-11-17 08:23:01.267179] 
bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:56.418 [2024-11-17 08:23:01.267196] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:56.418 [2024-11-17 08:23:01.267206] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:56.418 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.419 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.724 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.725 [2024-11-17 08:23:01.408869] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:56.725 [2024-11-17 08:23:01.408917] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:56.725 [2024-11-17 08:23:01.413588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.725 [2024-11-17 08:23:01.413635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.725 [2024-11-17 08:23:01.413653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.725 [2024-11-17 08:23:01.413665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
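The ASYNC EVENT REQUEST / "ABORTED - SQ DELETION" completions around this point are the expected fallout of host/discovery.sh@127 removing the 4420 listener: the controller for the 10.0.0.3:4420 path is torn down, its pending admin AERs are aborted, and the discovery poller then reports the 4420 path as not found while 4421 remains. The test confirms the surviving path with the get_subsystem_paths helper visible at host/discovery.sh@63; a sketch under the same naming assumptions as above:

    # TCP service ports (trsvcid) of every path attached to controller $1 on the
    # host app, numerically sorted: "4420 4421" before the removal, "4421" after.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # host/discovery.sh@131: NVMF_SECOND_PORT is 4421 in this run.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'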
00:22:56.725 [2024-11-17 08:23:01.413679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.725 [2024-11-17 08:23:01.413690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.725 [2024-11-17 08:23:01.413703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.725 [2024-11-17 08:23:01.413716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.725 [2024-11-17 08:23:01.413729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:56.725 [2024-11-17 08:23:01.414861] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:56.725 [2024-11-17 08:23:01.414897] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:56.725 [2024-11-17 08:23:01.414986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.725 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.726 
08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.726 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.993 08:23:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.930 [2024-11-17 08:23:02.760966] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:57.930 [2024-11-17 08:23:02.761002] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:57.930 [2024-11-17 08:23:02.761047] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.930 [2024-11-17 08:23:02.767033] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:57.930 [2024-11-17 08:23:02.825523] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:22:57.930 [2024-11-17 08:23:02.826875] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:22:57.930 [2024-11-17 08:23:02.829745] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:57.930 [2024-11-17 08:23:02.829817] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:57.930 [2024-11-17 08:23:02.831906] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.930 request: 00:22:57.930 { 00:22:57.930 "name": "nvme", 00:22:57.930 "trtype": "tcp", 00:22:57.930 "traddr": "10.0.0.3", 00:22:57.930 "adrfam": "ipv4", 00:22:57.930 "trsvcid": "8009", 00:22:57.930 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:57.930 "wait_for_attach": true, 00:22:57.930 "method": "bdev_nvme_start_discovery", 00:22:57.930 "req_id": 1 00:22:57.930 } 00:22:57.930 Got JSON-RPC error response 00:22:57.930 response: 00:22:57.930 { 00:22:57.930 "code": -17, 00:22:57.930 "message": "File exists" 00:22:57.930 } 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.930 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.190 request: 00:22:58.190 { 00:22:58.190 "name": "nvme_second", 00:22:58.190 "trtype": "tcp", 00:22:58.190 "traddr": "10.0.0.3", 00:22:58.190 "adrfam": "ipv4", 00:22:58.190 "trsvcid": "8009", 00:22:58.190 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:58.190 "wait_for_attach": true, 00:22:58.190 "method": "bdev_nvme_start_discovery", 00:22:58.190 "req_id": 1 00:22:58.190 } 00:22:58.190 Got JSON-RPC error response 00:22:58.190 response: 00:22:58.190 { 00:22:58.190 "code": -17, 00:22:58.190 "message": "File exists" 00:22:58.190 } 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.190 08:23:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:58.190 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.191 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:58.191 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.191 08:23:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.127 [2024-11-17 08:23:04.096181] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:22:59.127 [2024-11-17 08:23:04.096235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:22:59.127 [2024-11-17 08:23:04.096297] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:59.127 [2024-11-17 08:23:04.096312] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:59.127 [2024-11-17 08:23:04.096327] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:23:00.505 [2024-11-17 08:23:05.094622] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.505 [2024-11-17 08:23:05.094674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:23:00.505 [2024-11-17 08:23:05.094743] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:00.505 [2024-11-17 08:23:05.094758] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:00.505 [2024-11-17 08:23:05.094772] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:23:01.443 [2024-11-17 08:23:06.092762] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:23:01.443 request: 00:23:01.443 { 00:23:01.443 "name": "nvme_second", 00:23:01.443 "trtype": "tcp", 00:23:01.443 "traddr": "10.0.0.3", 00:23:01.443 "adrfam": "ipv4", 00:23:01.443 "trsvcid": "8010", 00:23:01.443 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:01.443 "wait_for_attach": false, 00:23:01.443 "attach_timeout_ms": 3000, 00:23:01.443 "method": "bdev_nvme_start_discovery", 00:23:01.443 "req_id": 1 00:23:01.443 } 00:23:01.443 Got JSON-RPC error response 00:23:01.443 response: 00:23:01.443 { 00:23:01.443 "code": -110, 00:23:01.443 "message": "Connection timed out" 00:23:01.443 } 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:01.443 08:23:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82451 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.443 rmmod nvme_tcp 00:23:01.443 rmmod nvme_fabrics 00:23:01.443 rmmod nvme_keyring 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 82418 ']' 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 82418 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 82418 ']' 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 82418 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82418 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:01.443 killing process with pid 82418 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82418' 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 82418 00:23:01.443 08:23:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 82418 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:23:02.822 00:23:02.822 real 0m11.131s 00:23:02.822 user 0m19.651s 00:23:02.822 sys 0m2.968s 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.822 ************************************ 00:23:02.822 END TEST nvmf_host_discovery 00:23:02.822 ************************************ 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.822 ************************************ 00:23:02.822 START TEST nvmf_host_multipath_status 00:23:02.822 ************************************ 00:23:02.822 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:03.082 * Looking for test storage... 00:23:03.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:03.082 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:03.082 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:03.082 08:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:03.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.082 --rc genhtml_branch_coverage=1 00:23:03.082 --rc genhtml_function_coverage=1 00:23:03.082 --rc genhtml_legend=1 00:23:03.082 --rc geninfo_all_blocks=1 00:23:03.082 --rc geninfo_unexecuted_blocks=1 00:23:03.082 00:23:03.082 ' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:03.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.082 --rc genhtml_branch_coverage=1 00:23:03.082 --rc genhtml_function_coverage=1 00:23:03.082 --rc genhtml_legend=1 00:23:03.082 --rc geninfo_all_blocks=1 00:23:03.082 --rc geninfo_unexecuted_blocks=1 00:23:03.082 00:23:03.082 ' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:03.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.082 --rc genhtml_branch_coverage=1 00:23:03.082 --rc genhtml_function_coverage=1 00:23:03.082 --rc genhtml_legend=1 00:23:03.082 --rc geninfo_all_blocks=1 00:23:03.082 --rc geninfo_unexecuted_blocks=1 00:23:03.082 00:23:03.082 ' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:03.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.082 --rc genhtml_branch_coverage=1 00:23:03.082 --rc genhtml_function_coverage=1 00:23:03.082 --rc genhtml_legend=1 00:23:03.082 --rc geninfo_all_blocks=1 00:23:03.082 --rc geninfo_unexecuted_blocks=1 00:23:03.082 00:23:03.082 ' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:03.082 08:23:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:03.082 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.342 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.342 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.342 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.342 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.343 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:03.343 Cannot find device "nvmf_init_br" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:03.343 Cannot find device "nvmf_init_br2" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:03.343 Cannot find device "nvmf_tgt_br" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:03.343 Cannot find device "nvmf_tgt_br2" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:03.343 Cannot find device "nvmf_init_br" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:03.343 Cannot find device "nvmf_init_br2" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:03.343 Cannot find device "nvmf_tgt_br" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:03.343 Cannot find device "nvmf_tgt_br2" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:03.343 Cannot find device "nvmf_br" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:23:03.343 Cannot find device "nvmf_init_if" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:03.343 Cannot find device "nvmf_init_if2" 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:03.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:03.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:03.343 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:03.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:03.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.144 ms 00:23:03.603 00:23:03.603 --- 10.0.0.3 ping statistics --- 00:23:03.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.603 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:03.603 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:03.603 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:23:03.603 00:23:03.603 --- 10.0.0.4 ping statistics --- 00:23:03.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.603 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:03.603 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:03.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
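The sequence above is nvmf/common.sh building the test network: the initial "Cannot find device" / "Cannot open network namespace" messages are just the expected output of the stale-interface cleanup, after which two initiator-side and two target-side veth pairs are created, the target ends are moved into the nvmf_tgt_ns_spdk namespace, everything is bridged over nvmf_br, TCP port 4420 is opened in iptables (ipts is that script's iptables wrapper), and connectivity is verified with single pings. A condensed, stand-alone approximation of that topology is sketched below; interface names and 10.0.0.x addresses are copied from the log, but this is not the actual nvmf/common.sh code.

  # hypothetical re-creation of the topology built above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # host-side path 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # host-side path 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side path 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target-side path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                          # enslave the peer ends to the bridge
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                       # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host (as in the output that follows)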
00:23:03.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:03.862 00:23:03.862 --- 10.0.0.1 ping statistics --- 00:23:03.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.862 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:03.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:23:03.862 00:23:03.862 --- 10.0.0.2 ping statistics --- 00:23:03.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.862 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.862 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=82973 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 82973 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82973 ']' 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
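With both directions pinging, nvmfappstart launches the target application inside the namespace (core mask 0x3, all tracepoint groups enabled) and blocks until its RPC socket answers; the NVMF_APP assignment shown above is what prepends the `ip netns exec` wrapper to the app command. A minimal approximation of that step, using the binary path and flags from the log (the real harness uses its waitforlisten helper; the polling loop here is only a stand-in):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # crude wait-for-RPC loop; the UNIX-domain socket is reachable from the host regardless of the namespace
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.2
  done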
00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.863 08:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:03.863 [2024-11-17 08:23:08.780065] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:03.863 [2024-11-17 08:23:08.780188] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.122 [2024-11-17 08:23:08.969416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:04.122 [2024-11-17 08:23:09.100896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.122 [2024-11-17 08:23:09.100954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.122 [2024-11-17 08:23:09.100971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.122 [2024-11-17 08:23:09.101009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.122 [2024-11-17 08:23:09.101023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.122 [2024-11-17 08:23:09.103554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.122 [2024-11-17 08:23:09.103588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.380 [2024-11-17 08:23:09.331423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:04.639 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.639 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:04.639 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:04.639 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:04.639 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.898 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82973 00:23:04.898 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:04.898 [2024-11-17 08:23:09.864039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.898 08:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:05.466 Malloc0 00:23:05.466 08:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:05.466 08:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:05.725 08:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:05.983 [2024-11-17 08:23:10.789458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:05.984 08:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:06.242 [2024-11-17 08:23:11.001258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=83029 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 83029 /var/tmp/bdevperf.sock 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 83029 ']' 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
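The target is then configured over RPC and the initiator-side bdevperf application is started: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, an ANA-reporting subsystem (-r) holding that namespace, and two listeners on the same address so the host sees two paths (ports 4420 and 4421). Condensed from the calls above, with paths and flags copied from the log; treat it as a sketch rather than the multipath_status.sh source:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r = ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # initiator side: bdevperf with its own RPC socket, QD 128, 4 KiB verify workload for up to 90 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &
  bdevperf_pid=$!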
00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.242 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:07.180 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.180 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:07.180 08:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:07.180 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:07.440 Nvme0n1 00:23:07.440 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:08.009 Nvme0n1 00:23:08.009 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:08.009 08:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:09.916 08:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:09.916 08:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:10.175 08:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:10.175 08:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:11.554 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.554 08:23:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.813 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:12.071 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.071 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:12.071 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.071 08:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:12.331 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.331 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:12.331 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.331 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:12.590 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.590 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:12.590 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:12.590 08:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:12.849 08:23:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:14.227 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:14.227 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:14.227 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.227 08:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.227 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.486 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.486 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.486 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.486 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.744 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.744 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.744 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.744 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.004 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.004 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.004 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.004 08:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.263 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.263 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:15.263 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:15.263 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:15.523 08:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:16.461 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:16.461 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.461 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.461 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.720 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.720 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:16.720 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.720 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.979 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.979 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.979 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:16.979 08:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:23:17.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.238 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.499 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.758 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.758 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:17.758 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:18.018 08:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:18.277 08:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:19.214 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:19.214 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:19.214 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.214 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.474 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.474 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:19.474 08:23:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.474 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.733 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.002 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.002 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.002 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.002 08:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.260 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.260 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:20.260 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.261 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.519 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.519 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:20.519 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:20.777 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:20.777 08:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.162 08:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.162 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.162 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.162 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.162 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.421 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.421 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.421 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.421 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.744 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.744 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:22.744 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.744 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:23:23.020 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.020 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:23.020 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.020 08:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.020 08:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.020 08:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:23.020 08:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:23.278 08:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:23.537 08:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:24.473 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:24.473 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:24.473 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.473 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.731 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.731 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:24.731 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.731 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.989 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.989 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.989 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:24.989 08:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
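Everything from this point on is the same pattern repeated for each ANA-state combination: the Nvme0n1 bdev was created by attaching the one subsystem through both listeners with -x multipath, and every port_status/check_status call is a jq filter over bdev_nvme_get_io_paths on the bdevperf RPC socket. A hedged reconstruction of those helpers follows; the names port_status and check_status are the script's own, but the bodies are inferred from the log rather than copied from multipath_status.sh:

  brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  port_status() {   # port_status <trsvcid> <field> <expected>; field is current|connected|accessible
      local got
      got=$($brpc bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ $got == "$3" ]]
  }

  check_status() {  # six expectations: 4420/4421 current, connected, accessible
      port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }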
00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.248 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.507 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.507 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:25.507 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.507 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.766 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.766 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:26.025 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:26.025 08:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:26.284 08:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:26.543 08:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:27.481 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:27.481 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:27.481 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
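The ANA transitions driving those checks come from a small set_ANA_state helper that pins one state on the 4420 listener and another on the 4421 listener; at multipath_status.sh@116 above the host-side policy is also switched to active_active so I/O can be spread across both optimized paths. Reconstructed from the log under the same assumptions as the previous sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  set_ANA_state() {  # set_ANA_state <state for port 4420> <state for port 4421>
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

  # e.g. make only port 4421 usable, then give both paths back as optimized under active_active:
  set_ANA_state inaccessible optimized
  sleep 1
  $brpc bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state optimized optimized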
00:23:27.481 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.741 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.741 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:27.741 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.741 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:28.000 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.000 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.000 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.000 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:28.000 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.000 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:28.001 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.001 08:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.260 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.260 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:28.260 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.260 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.519 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.519 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:28.519 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.519 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.778 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.778 
08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:28.778 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:29.038 08:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:29.038 08:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.416 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.676 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.677 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.935 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.935 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.936 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.936 08:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.194 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.194 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:31.194 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.194 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.453 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.453 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:31.453 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:31.453 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:31.712 08:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.094 08:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:33.094 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.094 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:33.094 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:33.094 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.353 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.353 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:33.353 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.353 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.611 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.611 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:33.611 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.611 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.869 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.870 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:33.870 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.870 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.128 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.128 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:34.128 08:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:34.128 08:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:34.386 08:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:35.323 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:35.323 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.323 08:23:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.323 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.582 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.582 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.582 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.582 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.841 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.841 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.841 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.841 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.100 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.100 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.100 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.100 08:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:36.359 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.360 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 83029 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 83029 ']' 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 83029 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83029 00:23:36.619 killing process with pid 83029 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83029' 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 83029 00:23:36.619 08:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 83029 00:23:36.619 { 00:23:36.619 "results": [ 00:23:36.619 { 00:23:36.619 "job": "Nvme0n1", 00:23:36.619 "core_mask": "0x4", 00:23:36.619 "workload": "verify", 00:23:36.619 "status": "terminated", 00:23:36.619 "verify_range": { 00:23:36.619 "start": 0, 00:23:36.619 "length": 16384 00:23:36.619 }, 00:23:36.619 "queue_depth": 128, 00:23:36.619 "io_size": 4096, 00:23:36.619 "runtime": 28.81935, 00:23:36.619 "iops": 8270.450235692339, 00:23:36.619 "mibps": 32.3064462331732, 00:23:36.619 "io_failed": 0, 00:23:36.619 "io_timeout": 0, 00:23:36.619 "avg_latency_us": 15452.637912496477, 00:23:36.619 "min_latency_us": 170.25542168674698, 00:23:36.619 "max_latency_us": 3018551.3124497994 00:23:36.619 } 00:23:36.619 ], 00:23:36.619 "core_count": 1 00:23:36.619 } 00:23:38.001 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 83029 00:23:38.001 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:38.001 [2024-11-17 08:23:11.106036] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:38.001 [2024-11-17 08:23:11.106173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83029 ] 00:23:38.001 [2024-11-17 08:23:11.292570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.001 [2024-11-17 08:23:11.398953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.001 [2024-11-17 08:23:11.590895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:38.001 Running I/O for 90 seconds... 
00:23:38.001 9280.00 IOPS, 36.25 MiB/s [2024-11-17T08:23:43.013Z] 9504.00 IOPS, 37.12 MiB/s [2024-11-17T08:23:43.013Z] 9549.33 IOPS, 37.30 MiB/s [2024-11-17T08:23:43.013Z] 9552.00 IOPS, 37.31 MiB/s [2024-11-17T08:23:43.013Z] 9504.20 IOPS, 37.13 MiB/s [2024-11-17T08:23:43.013Z] 9390.17 IOPS, 36.68 MiB/s [2024-11-17T08:23:43.013Z] 9317.29 IOPS, 36.40 MiB/s [2024-11-17T08:23:43.013Z] 9300.00 IOPS, 36.33 MiB/s [2024-11-17T08:23:43.013Z] 9340.44 IOPS, 36.49 MiB/s [2024-11-17T08:23:43.013Z] 9350.00 IOPS, 36.52 MiB/s [2024-11-17T08:23:43.013Z] 9381.09 IOPS, 36.64 MiB/s [2024-11-17T08:23:43.013Z] 9393.33 IOPS, 36.69 MiB/s [2024-11-17T08:23:43.013Z] [2024-11-17 08:23:25.525629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 08:23:25.525977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.001 [2024-11-17 08:23:25.525993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:38.001 [2024-11-17 
08:23:25.526014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.526316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.526967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.526990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.002 [2024-11-17 08:23:25.527285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:38.002 [2024-11-17 08:23:25.527710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.002 [2024-11-17 08:23:25.527727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.527768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.527808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.527849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.527888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.527929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.527970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.527993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.528692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
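ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the status a controller returns when a command arrives through a port whose ANA group is inaccessible, so the completions dumped here are consistent with the test flipping listener ANA states while bdevperf keeps I/O in flight; the multipath checks then expect the affected path to report accessible=false. A short flip-and-verify sketch using only the RPCs already traced in this log (the one-second delay mirrors the script's sleep 1; jq -e is an illustrative alternative to the script's string compare, not what the script itself does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # move the 4421 listener to inaccessible, give the host a moment to observe the change,
    # then exit non-zero unless that path is now reported with accessible == false
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
    sleep 1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -e '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421") | .accessible == false'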
00:23:38.003 [2024-11-17 08:23:25.528904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.528981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.003 [2024-11-17 08:23:25.528997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:38.003 [2024-11-17 08:23:25.529423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.003 [2024-11-17 08:23:25.529440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.529487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.529524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.529563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.529612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.529668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.529968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.529990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.530276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.530538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.530554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.004 [2024-11-17 08:23:25.531282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:25.531662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:25.531679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:38.004 9179.69 IOPS, 35.86 MiB/s [2024-11-17T08:23:43.016Z] 8524.00 IOPS, 33.30 MiB/s [2024-11-17T08:23:43.016Z] 7955.73 IOPS, 31.08 MiB/s [2024-11-17T08:23:43.016Z] 7618.38 IOPS, 29.76 MiB/s [2024-11-17T08:23:43.016Z] 7689.76 IOPS, 30.04 MiB/s [2024-11-17T08:23:43.016Z] 7753.22 IOPS, 30.29 MiB/s [2024-11-17T08:23:43.016Z] 7823.84 IOPS, 30.56 MiB/s [2024-11-17T08:23:43.016Z] 7895.05 IOPS, 30.84 MiB/s [2024-11-17T08:23:43.016Z] 7955.67 IOPS, 31.08 MiB/s [2024-11-17T08:23:43.016Z] 7995.00 IOPS, 31.23 MiB/s [2024-11-17T08:23:43.016Z] 8032.09 IOPS, 31.38 MiB/s [2024-11-17T08:23:43.016Z] 8069.46 IOPS, 31.52 MiB/s [2024-11-17T08:23:43.016Z] 8115.16 IOPS, 31.70 MiB/s [2024-11-17T08:23:43.016Z] 8154.81 IOPS, 31.85 MiB/s [2024-11-17T08:23:43.016Z] [2024-11-17 08:23:39.301659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:39.301726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:39.301777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.004 [2024-11-17 08:23:39.301795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:38.004 [2024-11-17 08:23:39.301839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.301855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.301876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.301891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.301913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.301928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.301950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.301965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c 
p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.301986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.302002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.302039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.302077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.302114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.302150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.302188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.302863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.302911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.302933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.302976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303422] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.005 [2024-11-17 08:23:39.303805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:38.005 [2024-11-17 08:23:39.303864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.005 [2024-11-17 08:23:39.303879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.303901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.303917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.303939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.303966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.303988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.304004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.304026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.304041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.304063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.304079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:68 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.305790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.305896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:23:38.006 [2024-11-17 08:23:39.305933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.305949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.006 [2024-11-17 08:23:39.306849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.306888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.306928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.306950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.306978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.307002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.307019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.307042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.307059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:38.006 [2024-11-17 08:23:39.307093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.006 [2024-11-17 08:23:39.307121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.007 [2024-11-17 08:23:39.307159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.307514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.307530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.308510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.007 [2024-11-17 08:23:39.308557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.007 [2024-11-17 08:23:39.308595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.308650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.308690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:38.007 [2024-11-17 08:23:39.308727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.007 [2024-11-17 08:23:39.308769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.007 [2024-11-17 08:23:39.308807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:38.007 [2024-11-17 08:23:39.308844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.308881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.308919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.308966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.308988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.309004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.309026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.309041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.309078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.309096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.309118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.309134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:38.007 [2024-11-17 08:23:39.309161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:38.007 [2024-11-17 08:23:39.309176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:38.007 8196.63 IOPS, 32.02 MiB/s [2024-11-17T08:23:43.019Z] 8240.75 IOPS, 32.19 MiB/s [2024-11-17T08:23:43.019Z] Received shutdown signal, test time was about 28.819991 seconds 00:23:38.007 00:23:38.007 Latency(us) 00:23:38.007 [2024-11-17T08:23:43.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.007 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:38.007 Verification LBA range: start 0x0 length 0x4000 00:23:38.007 Nvme0n1 : 28.82 8270.45 32.31 0.00 0.00 15452.64 170.26 3018551.31 00:23:38.007 [2024-11-17T08:23:43.019Z] =================================================================================================================== 00:23:38.007 [2024-11-17T08:23:43.019Z] Total : 8270.45 32.31 0.00 0.00 15452.64 170.26 3018551.31 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.007 rmmod nvme_tcp 00:23:38.007 rmmod nvme_fabrics 00:23:38.007 rmmod nvme_keyring 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 82973 ']' 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 82973 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82973 ']' 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill 
-0 82973 00:23:38.007 08:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82973 00:23:38.266 killing process with pid 82973 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82973' 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82973 00:23:38.266 08:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82973 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:39.645 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:39.906 08:23:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:39.906 00:23:39.906 real 0m36.950s 00:23:39.906 user 1m50.338s 00:23:39.906 sys 0m13.131s 00:23:39.906 ************************************ 00:23:39.906 END TEST nvmf_host_multipath_status 00:23:39.906 ************************************ 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.906 ************************************ 00:23:39.906 START TEST nvmf_discovery_remove_ifc 00:23:39.906 ************************************ 00:23:39.906 08:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:40.167 * Looking for test storage... 
00:23:40.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:40.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.167 --rc genhtml_branch_coverage=1 00:23:40.167 --rc genhtml_function_coverage=1 00:23:40.167 --rc genhtml_legend=1 00:23:40.167 --rc geninfo_all_blocks=1 00:23:40.167 --rc geninfo_unexecuted_blocks=1 00:23:40.167 00:23:40.167 ' 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:40.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.167 --rc genhtml_branch_coverage=1 00:23:40.167 --rc genhtml_function_coverage=1 00:23:40.167 --rc genhtml_legend=1 00:23:40.167 --rc geninfo_all_blocks=1 00:23:40.167 --rc geninfo_unexecuted_blocks=1 00:23:40.167 00:23:40.167 ' 00:23:40.167 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:40.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.167 --rc genhtml_branch_coverage=1 00:23:40.167 --rc genhtml_function_coverage=1 00:23:40.167 --rc genhtml_legend=1 00:23:40.167 --rc geninfo_all_blocks=1 00:23:40.168 --rc geninfo_unexecuted_blocks=1 00:23:40.168 00:23:40.168 ' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:40.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.168 --rc genhtml_branch_coverage=1 00:23:40.168 --rc genhtml_function_coverage=1 00:23:40.168 --rc genhtml_legend=1 00:23:40.168 --rc geninfo_all_blocks=1 00:23:40.168 --rc geninfo_unexecuted_blocks=1 00:23:40.168 00:23:40.168 ' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:40.168 08:23:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:40.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:40.168 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:40.169 08:23:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:40.169 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:40.169 Cannot find device "nvmf_init_br" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:40.428 Cannot find device "nvmf_init_br2" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:40.428 Cannot find device "nvmf_tgt_br" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:40.428 Cannot find device "nvmf_tgt_br2" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:40.428 Cannot find device "nvmf_init_br" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:40.428 Cannot find device "nvmf_init_br2" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:40.428 Cannot find device "nvmf_tgt_br" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:40.428 Cannot find device "nvmf_tgt_br2" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:40.428 Cannot find device "nvmf_br" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:40.428 Cannot find device "nvmf_init_if" 00:23:40.428 08:23:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:40.428 Cannot find device "nvmf_init_if2" 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:40.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:40.428 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:40.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:40.429 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:40.689 08:23:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:40.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:40.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:23:40.689 00:23:40.689 --- 10.0.0.3 ping statistics --- 00:23:40.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.689 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:40.689 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:40.689 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:23:40.689 00:23:40.689 --- 10.0.0.4 ping statistics --- 00:23:40.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.689 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:40.689 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:40.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:40.949 00:23:40.949 --- 10.0.0.1 ping statistics --- 00:23:40.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.949 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:40.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:23:40.949 00:23:40.949 --- 10.0.0.2 ping statistics --- 00:23:40.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.949 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.949 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=83842 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 83842 00:23:40.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83842 ']' 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
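What nvmf_veth_init built above, condensed to the first initiator/target pair (the *_if2/*_br2 pair is created the same way, and the earlier "Cannot find device" / "Cannot open network namespace" messages are just the pre-clean step, where every teardown command is OR'ed with true). A sketch of the equivalent commands, using the interface names and 10.0.0.0/24 addresses seen in the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end gets moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the two *_br veth peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                            # root namespace -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> root namespace

The target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is why its listeners later bind to 10.0.0.3 rather than to a root-namespace address.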
00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.950 08:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.950 [2024-11-17 08:23:45.863753] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:40.950 [2024-11-17 08:23:45.863863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.209 [2024-11-17 08:23:46.046500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.209 [2024-11-17 08:23:46.147379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.209 [2024-11-17 08:23:46.147433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.209 [2024-11-17 08:23:46.147462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.209 [2024-11-17 08:23:46.147483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.209 [2024-11-17 08:23:46.147495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.209 [2024-11-17 08:23:46.148537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.469 [2024-11-17 08:23:46.368013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:41.728 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.728 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:41.728 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.728 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.729 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.729 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.729 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:41.729 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.729 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.989 [2024-11-17 08:23:46.745911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.989 [2024-11-17 08:23:46.754101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:41.989 null0 00:23:41.989 [2024-11-17 08:23:46.785951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83874 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83874 /tmp/host.sock 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83874 ']' 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:41.989 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.989 08:23:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.989 [2024-11-17 08:23:46.922368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:41.989 [2024-11-17 08:23:46.922582] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83874 ] 00:23:42.248 [2024-11-17 08:23:47.104941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.248 [2024-11-17 08:23:47.237283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.817 08:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.077 [2024-11-17 08:23:47.989851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:43.336 08:23:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.336 08:23:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:43.336 08:23:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.336 08:23:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.275 [2024-11-17 08:23:49.146366] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:44.275 [2024-11-17 08:23:49.146414] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:44.275 [2024-11-17 08:23:49.146479] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:44.275 [2024-11-17 08:23:49.152421] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:44.275 [2024-11-17 08:23:49.214942] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:44.275 [2024-11-17 08:23:49.216634] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:23:44.275 [2024-11-17 08:23:49.219018] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:44.275 [2024-11-17 08:23:49.219100] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:44.275 [2024-11-17 08:23:49.219167] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:44.275 [2024-11-17 08:23:49.219194] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:44.275 [2024-11-17 08:23:49.219230] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:44.275 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.275 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:44.275 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.275 [2024-11-17 08:23:49.225415] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
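At this point the host side is up: a second nvmf_tgt instance (started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) acts purely as the initiator, and the rpc_cmd traces above amount to the following sequence (a sketch reconstructed from the trace, not the verbatim script):

  # the host app serves RPCs on /tmp/host.sock instead of the default /var/tmp/spdk.sock
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

Because --wait-for-attach is given, the discovery RPC does not return until the discovered subsystem has been attached under the base name nvme, which is why the very next bdev_get_bdevs already reports nvme0n1.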
00:23:44.275 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.276 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:44.536 08:23:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.475 08:23:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:45.475 08:23:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.412 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.412 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.412 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.413 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.413 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.672 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.672 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:46.672 08:23:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:47.610 08:23:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:48.598 08:23:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:48.598 08:23:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.976 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.977 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:49.977 08:23:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:49.977 [2024-11-17 08:23:54.636982] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:49.977 [2024-11-17 08:23:54.637078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.977 [2024-11-17 08:23:54.637098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.977 [2024-11-17 08:23:54.637117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.977 [2024-11-17 08:23:54.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.977 [2024-11-17 08:23:54.637143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.977 [2024-11-17 08:23:54.637155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.977 [2024-11-17 08:23:54.637168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.977 [2024-11-17 08:23:54.637179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.977 [2024-11-17 08:23:54.637192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.977 [2024-11-17 08:23:54.637203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.977 [2024-11-17 08:23:54.637215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:49.977 [2024-11-17 08:23:54.646956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:49.977 [2024-11-17 08:23:54.656957] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:49.977 [2024-11-17 08:23:54.656994] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:49.977 [2024-11-17 08:23:54.657004] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:49.977 [2024-11-17 08:23:54.657022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:49.977 [2024-11-17 08:23:54.657094] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.915 [2024-11-17 08:23:55.694669] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:50.915 [2024-11-17 08:23:55.694835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:50.915 [2024-11-17 08:23:55.694918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:50.915 [2024-11-17 08:23:55.695042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:50.915 [2024-11-17 08:23:55.696572] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:50.915 [2024-11-17 08:23:55.696739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:50.915 [2024-11-17 08:23:55.696789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:50.915 [2024-11-17 08:23:55.696847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:50.915 [2024-11-17 08:23:55.696888] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
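The repeating get_bdev_list / sleep 1 fragments above are the test's polling loop: wait_for_bdev nvme0n1 ran until the namespace bdev appeared, and after 10.0.0.3 was deleted and nvmf_tgt_if downed, the same loop (wait_for_bdev '') runs until the bdev disappears again. Reconstructed from the @29/@33/@34 trace lines, the helpers look roughly like this (a sketch; the real script presumably also bounds the number of retries):

  get_bdev_list() {
      # names of all bdevs the host app currently sees, normalized to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }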
00:23:50.915 [2024-11-17 08:23:55.696919] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:50.915 [2024-11-17 08:23:55.696945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:50.915 [2024-11-17 08:23:55.696985] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:50.915 [2024-11-17 08:23:55.697022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:50.915 08:23:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:51.853 [2024-11-17 08:23:56.695538] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:51.853 [2024-11-17 08:23:56.695597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:51.853 [2024-11-17 08:23:56.695639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:51.853 [2024-11-17 08:23:56.695653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:51.853 [2024-11-17 08:23:56.695669] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:51.853 [2024-11-17 08:23:56.695682] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:51.853 [2024-11-17 08:23:56.695693] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:51.853 [2024-11-17 08:23:56.695702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
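Timing-wise this matches the options passed to bdev_nvme_start_discovery: with --reconnect-delay-sec 1 the path is retried about once per second, each attempt failing in uring_sock_create with errno 110 (ETIMEDOUT) because 10.0.0.3 no longer exists, and after --ctrlr-loss-timeout-sec 2 the controller is given up, which is what finally drops nvme0n1 from the bdev list. If you need to watch this transition outside the test, one option (an assumption, not something discovery_remove_ifc.sh does) is to poll the controller state over the same RPC socket:

  # hypothetical debug probe, not part of the test script
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers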
00:23:51.853 [2024-11-17 08:23:56.695758] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:51.853 [2024-11-17 08:23:56.695808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.853 [2024-11-17 08:23:56.695826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.854 [2024-11-17 08:23:56.695851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.854 [2024-11-17 08:23:56.695863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.854 [2024-11-17 08:23:56.695876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.854 [2024-11-17 08:23:56.695888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.854 [2024-11-17 08:23:56.695901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.854 [2024-11-17 08:23:56.695912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.854 [2024-11-17 08:23:56.695926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.854 [2024-11-17 08:23:56.695953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.854 [2024-11-17 08:23:56.695978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:23:51.854 [2024-11-17 08:23:56.696478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:51.854 [2024-11-17 08:23:56.697515] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:51.854 [2024-11-17 08:23:56.697548] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:51.854 08:23:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.232 08:23:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:53.232 08:23:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:53.800 [2024-11-17 08:23:58.706542] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:53.800 [2024-11-17 08:23:58.706714] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:53.800 [2024-11-17 08:23:58.706779] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:53.800 [2024-11-17 08:23:58.712596] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:53.800 [2024-11-17 08:23:58.767044] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:23:53.800 [2024-11-17 08:23:58.768586] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:23:53.800 [2024-11-17 08:23:58.771095] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:53.800 [2024-11-17 08:23:58.771263] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:53.800 [2024-11-17 08:23:58.771350] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:53.800 [2024-11-17 08:23:58.771466] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:53.800 [2024-11-17 08:23:58.771575] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:53.800 [2024-11-17 08:23:58.775875] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
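Recovery is the mirror image of the failure injection: the test put the address back and re-raised the interface inside the namespace,

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

and the host's discovery poller, still registered against 10.0.0.3:8009, reconnects and attaches the subsystem again. Because the original controller was fully failed and removed, the re-attach creates a fresh controller under the same base name, hence nvme1 / nvme1n1 instead of nvme0n1, and wait_for_bdev nvme1n1 is what the test checks next.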
00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.057 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83874 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83874 ']' 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83874 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.058 08:23:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83874 00:23:54.058 08:23:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.058 08:23:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.058 killing process with pid 83874 00:23:54.058 08:23:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83874' 00:23:54.058 08:23:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83874 00:23:54.058 08:23:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83874 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.439 rmmod nvme_tcp 00:23:55.439 rmmod nvme_fabrics 00:23:55.439 rmmod nvme_keyring 00:23:55.439 08:24:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 83842 ']' 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 83842 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83842 ']' 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83842 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83842 00:23:55.439 killing process with pid 83842 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83842' 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83842 00:23:55.439 08:24:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83842 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:56.817 00:23:56.817 real 0m16.947s 00:23:56.817 user 0m26.873s 00:23:56.817 sys 0m3.913s 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.817 08:24:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.817 ************************************ 00:23:56.817 END TEST nvmf_discovery_remove_ifc 00:23:56.817 ************************************ 00:23:57.076 08:24:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:57.076 08:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.076 08:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.076 08:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.076 ************************************ 00:23:57.076 START TEST nvmf_identify_kernel_target 00:23:57.076 ************************************ 00:23:57.076 08:24:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:57.076 * Looking for test storage... 
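With both discovery cycles verified, the test tears everything down through nvmftestfini (the trailing trace above): the nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded, the host app (pid 83874) and the target (pid 83842) are killed, the firewall rules are stripped, and the veth/bridge/namespace topology is deleted. The firewall step is why every rule added earlier carried an '-m comment SPDK_NVMF:' marker; the cleanup simply reloads a filtered ruleset, roughly:

  # drop only the rules this test added, leaving the rest of the firewall untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore

After the "END TEST nvmf_discovery_remove_ifc" banner, the harness moves straight on to the next host-side test, nvmf_identify_kernel_target.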
00:23:57.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:57.076 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:57.076 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:57.076 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.338 --rc genhtml_branch_coverage=1 00:23:57.338 --rc genhtml_function_coverage=1 00:23:57.338 --rc genhtml_legend=1 00:23:57.338 --rc geninfo_all_blocks=1 00:23:57.338 --rc geninfo_unexecuted_blocks=1 00:23:57.338 00:23:57.338 ' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.338 --rc genhtml_branch_coverage=1 00:23:57.338 --rc genhtml_function_coverage=1 00:23:57.338 --rc genhtml_legend=1 00:23:57.338 --rc geninfo_all_blocks=1 00:23:57.338 --rc geninfo_unexecuted_blocks=1 00:23:57.338 00:23:57.338 ' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.338 --rc genhtml_branch_coverage=1 00:23:57.338 --rc genhtml_function_coverage=1 00:23:57.338 --rc genhtml_legend=1 00:23:57.338 --rc geninfo_all_blocks=1 00:23:57.338 --rc geninfo_unexecuted_blocks=1 00:23:57.338 00:23:57.338 ' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.338 --rc genhtml_branch_coverage=1 00:23:57.338 --rc genhtml_function_coverage=1 00:23:57.338 --rc genhtml_legend=1 00:23:57.338 --rc geninfo_all_blocks=1 00:23:57.338 --rc geninfo_unexecuted_blocks=1 00:23:57.338 00:23:57.338 ' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
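The identify_kernel_nvmf.sh run opens by probing the installed lcov: scripts/common.sh (cmp_versions via lt, traced above) splits both version strings on '.', '-' and ':' and compares them field by field to decide whether lcov is older than 2, which selects the spelling of the coverage flags exported in LCOV_OPTS/LCOV. The sketch below is only a rough equivalent of that comparison, assuming purely numeric fields, not the actual helper.

    # Return success (0) when $1 is strictly older than $2, e.g. version_lt 1.15 2.
    version_lt() {
        local IFS=.-:                    # same separators the trace splits on
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 spelling"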
00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.338 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.339 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:57.339 08:24:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:57.339 08:24:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:57.339 Cannot find device "nvmf_init_br" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:57.339 Cannot find device "nvmf_init_br2" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:57.339 Cannot find device "nvmf_tgt_br" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:57.339 Cannot find device "nvmf_tgt_br2" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:57.339 Cannot find device "nvmf_init_br" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:57.339 Cannot find device "nvmf_init_br2" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:57.339 Cannot find device "nvmf_tgt_br" 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:57.339 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:57.599 Cannot find device "nvmf_tgt_br2" 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:57.599 Cannot find device "nvmf_br" 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:57.599 Cannot find device "nvmf_init_if" 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:57.599 Cannot find device "nvmf_init_if2" 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:57.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.599 08:24:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:57.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:57.599 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:57.864 08:24:02 
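At this point nvmf_veth_init has recreated the fixture the previous test tore down: a network namespace for the target, two veth pairs per side, 10.0.0.1/24 and 10.0.0.2/24 on the initiator ends, 10.0.0.3/24 and 10.0.0.4/24 on the target ends inside nvmf_tgt_ns_spdk, and an nvmf_br bridge. The next lines of the trace enslave the *_br peers to that bridge, insert SPDK_NVMF-tagged iptables ACCEPT rules for port 4420, and ping each address to prove reachability. A condensed sketch of the same topology, showing one veth pair per side (the real helper creates two per side), all names taken from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the two halves together
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'                      # tagged so teardown can filter it out
    ping -c 1 10.0.0.3                                            # initiator can reach the target address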
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:57.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:57.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.151 ms 00:23:57.864 00:23:57.864 --- 10.0.0.3 ping statistics --- 00:23:57.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.864 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:57.864 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:57.864 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:23:57.864 00:23:57.864 --- 10.0.0.4 ping statistics --- 00:23:57.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.864 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:57.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:57.864 00:23:57.864 --- 10.0.0.1 ping statistics --- 00:23:57.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.864 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:57.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:57.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:23:57.864 00:23:57.864 --- 10.0.0.2 ping statistics --- 00:23:57.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.864 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:57.864 08:24:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:58.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:58.430 Waiting for block devices as requested 00:23:58.687 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:58.687 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:58.946 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:58.946 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:58.946 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:58.946 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:58.946 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:58.947 No valid GPT data, bailing 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:58.947 08:24:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:58.947 No valid GPT data, bailing 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:58.947 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:58.947 No valid GPT data, bailing 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:59.206 08:24:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:59.206 No valid GPT data, bailing 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -a 10.0.0.1 -t tcp -s 4420 00:23:59.206 00:23:59.206 Discovery Log Number of Records 2, Generation counter 2 00:23:59.206 =====Discovery Log Entry 0====== 00:23:59.206 trtype: tcp 00:23:59.206 adrfam: ipv4 00:23:59.206 subtype: current discovery subsystem 00:23:59.206 treq: not specified, sq flow control disable supported 00:23:59.206 portid: 1 00:23:59.206 trsvcid: 4420 00:23:59.206 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:59.206 traddr: 10.0.0.1 00:23:59.206 eflags: none 00:23:59.206 sectype: none 00:23:59.206 =====Discovery Log Entry 1====== 00:23:59.206 trtype: tcp 00:23:59.206 adrfam: ipv4 00:23:59.206 subtype: nvme subsystem 00:23:59.206 treq: not 
specified, sq flow control disable supported 00:23:59.206 portid: 1 00:23:59.206 trsvcid: 4420 00:23:59.206 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:59.206 traddr: 10.0.0.1 00:23:59.206 eflags: none 00:23:59.206 sectype: none 00:23:59.206 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:59.206 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:59.467 ===================================================== 00:23:59.467 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:59.467 ===================================================== 00:23:59.467 Controller Capabilities/Features 00:23:59.467 ================================ 00:23:59.467 Vendor ID: 0000 00:23:59.467 Subsystem Vendor ID: 0000 00:23:59.467 Serial Number: d125e517706b9d71845c 00:23:59.467 Model Number: Linux 00:23:59.467 Firmware Version: 6.8.9-20 00:23:59.467 Recommended Arb Burst: 0 00:23:59.467 IEEE OUI Identifier: 00 00 00 00:23:59.467 Multi-path I/O 00:23:59.467 May have multiple subsystem ports: No 00:23:59.467 May have multiple controllers: No 00:23:59.467 Associated with SR-IOV VF: No 00:23:59.467 Max Data Transfer Size: Unlimited 00:23:59.467 Max Number of Namespaces: 0 00:23:59.467 Max Number of I/O Queues: 1024 00:23:59.467 NVMe Specification Version (VS): 1.3 00:23:59.467 NVMe Specification Version (Identify): 1.3 00:23:59.467 Maximum Queue Entries: 1024 00:23:59.467 Contiguous Queues Required: No 00:23:59.467 Arbitration Mechanisms Supported 00:23:59.467 Weighted Round Robin: Not Supported 00:23:59.467 Vendor Specific: Not Supported 00:23:59.467 Reset Timeout: 7500 ms 00:23:59.467 Doorbell Stride: 4 bytes 00:23:59.467 NVM Subsystem Reset: Not Supported 00:23:59.467 Command Sets Supported 00:23:59.467 NVM Command Set: Supported 00:23:59.467 Boot Partition: Not Supported 00:23:59.467 Memory Page Size Minimum: 4096 bytes 00:23:59.467 Memory Page Size Maximum: 4096 bytes 00:23:59.467 Persistent Memory Region: Not Supported 00:23:59.467 Optional Asynchronous Events Supported 00:23:59.467 Namespace Attribute Notices: Not Supported 00:23:59.467 Firmware Activation Notices: Not Supported 00:23:59.467 ANA Change Notices: Not Supported 00:23:59.467 PLE Aggregate Log Change Notices: Not Supported 00:23:59.467 LBA Status Info Alert Notices: Not Supported 00:23:59.467 EGE Aggregate Log Change Notices: Not Supported 00:23:59.467 Normal NVM Subsystem Shutdown event: Not Supported 00:23:59.467 Zone Descriptor Change Notices: Not Supported 00:23:59.467 Discovery Log Change Notices: Supported 00:23:59.467 Controller Attributes 00:23:59.467 128-bit Host Identifier: Not Supported 00:23:59.467 Non-Operational Permissive Mode: Not Supported 00:23:59.467 NVM Sets: Not Supported 00:23:59.467 Read Recovery Levels: Not Supported 00:23:59.467 Endurance Groups: Not Supported 00:23:59.467 Predictable Latency Mode: Not Supported 00:23:59.467 Traffic Based Keep ALive: Not Supported 00:23:59.467 Namespace Granularity: Not Supported 00:23:59.467 SQ Associations: Not Supported 00:23:59.467 UUID List: Not Supported 00:23:59.467 Multi-Domain Subsystem: Not Supported 00:23:59.467 Fixed Capacity Management: Not Supported 00:23:59.467 Variable Capacity Management: Not Supported 00:23:59.467 Delete Endurance Group: Not Supported 00:23:59.467 Delete NVM Set: Not Supported 00:23:59.467 Extended LBA Formats Supported: Not Supported 00:23:59.467 Flexible Data 
Placement Supported: Not Supported 00:23:59.467 00:23:59.467 Controller Memory Buffer Support 00:23:59.467 ================================ 00:23:59.467 Supported: No 00:23:59.467 00:23:59.467 Persistent Memory Region Support 00:23:59.467 ================================ 00:23:59.467 Supported: No 00:23:59.467 00:23:59.467 Admin Command Set Attributes 00:23:59.467 ============================ 00:23:59.467 Security Send/Receive: Not Supported 00:23:59.467 Format NVM: Not Supported 00:23:59.467 Firmware Activate/Download: Not Supported 00:23:59.467 Namespace Management: Not Supported 00:23:59.467 Device Self-Test: Not Supported 00:23:59.467 Directives: Not Supported 00:23:59.467 NVMe-MI: Not Supported 00:23:59.467 Virtualization Management: Not Supported 00:23:59.467 Doorbell Buffer Config: Not Supported 00:23:59.467 Get LBA Status Capability: Not Supported 00:23:59.467 Command & Feature Lockdown Capability: Not Supported 00:23:59.467 Abort Command Limit: 1 00:23:59.467 Async Event Request Limit: 1 00:23:59.467 Number of Firmware Slots: N/A 00:23:59.468 Firmware Slot 1 Read-Only: N/A 00:23:59.468 Firmware Activation Without Reset: N/A 00:23:59.468 Multiple Update Detection Support: N/A 00:23:59.468 Firmware Update Granularity: No Information Provided 00:23:59.468 Per-Namespace SMART Log: No 00:23:59.468 Asymmetric Namespace Access Log Page: Not Supported 00:23:59.468 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:59.468 Command Effects Log Page: Not Supported 00:23:59.468 Get Log Page Extended Data: Supported 00:23:59.468 Telemetry Log Pages: Not Supported 00:23:59.468 Persistent Event Log Pages: Not Supported 00:23:59.468 Supported Log Pages Log Page: May Support 00:23:59.468 Commands Supported & Effects Log Page: Not Supported 00:23:59.468 Feature Identifiers & Effects Log Page:May Support 00:23:59.468 NVMe-MI Commands & Effects Log Page: May Support 00:23:59.468 Data Area 4 for Telemetry Log: Not Supported 00:23:59.468 Error Log Page Entries Supported: 1 00:23:59.468 Keep Alive: Not Supported 00:23:59.468 00:23:59.468 NVM Command Set Attributes 00:23:59.468 ========================== 00:23:59.468 Submission Queue Entry Size 00:23:59.468 Max: 1 00:23:59.468 Min: 1 00:23:59.468 Completion Queue Entry Size 00:23:59.468 Max: 1 00:23:59.468 Min: 1 00:23:59.468 Number of Namespaces: 0 00:23:59.468 Compare Command: Not Supported 00:23:59.468 Write Uncorrectable Command: Not Supported 00:23:59.468 Dataset Management Command: Not Supported 00:23:59.468 Write Zeroes Command: Not Supported 00:23:59.468 Set Features Save Field: Not Supported 00:23:59.468 Reservations: Not Supported 00:23:59.468 Timestamp: Not Supported 00:23:59.468 Copy: Not Supported 00:23:59.468 Volatile Write Cache: Not Present 00:23:59.468 Atomic Write Unit (Normal): 1 00:23:59.468 Atomic Write Unit (PFail): 1 00:23:59.468 Atomic Compare & Write Unit: 1 00:23:59.468 Fused Compare & Write: Not Supported 00:23:59.468 Scatter-Gather List 00:23:59.468 SGL Command Set: Supported 00:23:59.468 SGL Keyed: Not Supported 00:23:59.468 SGL Bit Bucket Descriptor: Not Supported 00:23:59.468 SGL Metadata Pointer: Not Supported 00:23:59.468 Oversized SGL: Not Supported 00:23:59.468 SGL Metadata Address: Not Supported 00:23:59.468 SGL Offset: Supported 00:23:59.468 Transport SGL Data Block: Not Supported 00:23:59.468 Replay Protected Memory Block: Not Supported 00:23:59.468 00:23:59.468 Firmware Slot Information 00:23:59.468 ========================= 00:23:59.468 Active slot: 0 00:23:59.468 00:23:59.468 00:23:59.468 Error Log 
00:23:59.468 ========= 00:23:59.468 00:23:59.468 Active Namespaces 00:23:59.468 ================= 00:23:59.468 Discovery Log Page 00:23:59.468 ================== 00:23:59.468 Generation Counter: 2 00:23:59.468 Number of Records: 2 00:23:59.468 Record Format: 0 00:23:59.468 00:23:59.468 Discovery Log Entry 0 00:23:59.468 ---------------------- 00:23:59.468 Transport Type: 3 (TCP) 00:23:59.468 Address Family: 1 (IPv4) 00:23:59.468 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:59.468 Entry Flags: 00:23:59.468 Duplicate Returned Information: 0 00:23:59.468 Explicit Persistent Connection Support for Discovery: 0 00:23:59.468 Transport Requirements: 00:23:59.468 Secure Channel: Not Specified 00:23:59.468 Port ID: 1 (0x0001) 00:23:59.468 Controller ID: 65535 (0xffff) 00:23:59.468 Admin Max SQ Size: 32 00:23:59.468 Transport Service Identifier: 4420 00:23:59.468 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:59.468 Transport Address: 10.0.0.1 00:23:59.468 Discovery Log Entry 1 00:23:59.468 ---------------------- 00:23:59.468 Transport Type: 3 (TCP) 00:23:59.468 Address Family: 1 (IPv4) 00:23:59.468 Subsystem Type: 2 (NVM Subsystem) 00:23:59.468 Entry Flags: 00:23:59.468 Duplicate Returned Information: 0 00:23:59.468 Explicit Persistent Connection Support for Discovery: 0 00:23:59.468 Transport Requirements: 00:23:59.468 Secure Channel: Not Specified 00:23:59.468 Port ID: 1 (0x0001) 00:23:59.468 Controller ID: 65535 (0xffff) 00:23:59.468 Admin Max SQ Size: 32 00:23:59.468 Transport Service Identifier: 4420 00:23:59.468 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:59.468 Transport Address: 10.0.0.1 00:23:59.468 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:59.728 get_feature(0x01) failed 00:23:59.728 get_feature(0x02) failed 00:23:59.728 get_feature(0x04) failed 00:23:59.728 ===================================================== 00:23:59.728 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:59.728 ===================================================== 00:23:59.728 Controller Capabilities/Features 00:23:59.728 ================================ 00:23:59.728 Vendor ID: 0000 00:23:59.728 Subsystem Vendor ID: 0000 00:23:59.728 Serial Number: 7ddaea35def9ba4bef02 00:23:59.728 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:59.728 Firmware Version: 6.8.9-20 00:23:59.728 Recommended Arb Burst: 6 00:23:59.728 IEEE OUI Identifier: 00 00 00 00:23:59.728 Multi-path I/O 00:23:59.728 May have multiple subsystem ports: Yes 00:23:59.728 May have multiple controllers: Yes 00:23:59.728 Associated with SR-IOV VF: No 00:23:59.728 Max Data Transfer Size: Unlimited 00:23:59.728 Max Number of Namespaces: 1024 00:23:59.728 Max Number of I/O Queues: 128 00:23:59.728 NVMe Specification Version (VS): 1.3 00:23:59.728 NVMe Specification Version (Identify): 1.3 00:23:59.728 Maximum Queue Entries: 1024 00:23:59.728 Contiguous Queues Required: No 00:23:59.728 Arbitration Mechanisms Supported 00:23:59.728 Weighted Round Robin: Not Supported 00:23:59.728 Vendor Specific: Not Supported 00:23:59.728 Reset Timeout: 7500 ms 00:23:59.728 Doorbell Stride: 4 bytes 00:23:59.728 NVM Subsystem Reset: Not Supported 00:23:59.728 Command Sets Supported 00:23:59.728 NVM Command Set: Supported 00:23:59.728 Boot Partition: Not Supported 00:23:59.728 Memory 
Page Size Minimum: 4096 bytes 00:23:59.728 Memory Page Size Maximum: 4096 bytes 00:23:59.728 Persistent Memory Region: Not Supported 00:23:59.728 Optional Asynchronous Events Supported 00:23:59.728 Namespace Attribute Notices: Supported 00:23:59.728 Firmware Activation Notices: Not Supported 00:23:59.728 ANA Change Notices: Supported 00:23:59.728 PLE Aggregate Log Change Notices: Not Supported 00:23:59.728 LBA Status Info Alert Notices: Not Supported 00:23:59.728 EGE Aggregate Log Change Notices: Not Supported 00:23:59.728 Normal NVM Subsystem Shutdown event: Not Supported 00:23:59.728 Zone Descriptor Change Notices: Not Supported 00:23:59.728 Discovery Log Change Notices: Not Supported 00:23:59.728 Controller Attributes 00:23:59.728 128-bit Host Identifier: Supported 00:23:59.728 Non-Operational Permissive Mode: Not Supported 00:23:59.728 NVM Sets: Not Supported 00:23:59.728 Read Recovery Levels: Not Supported 00:23:59.728 Endurance Groups: Not Supported 00:23:59.728 Predictable Latency Mode: Not Supported 00:23:59.728 Traffic Based Keep ALive: Supported 00:23:59.728 Namespace Granularity: Not Supported 00:23:59.728 SQ Associations: Not Supported 00:23:59.728 UUID List: Not Supported 00:23:59.728 Multi-Domain Subsystem: Not Supported 00:23:59.728 Fixed Capacity Management: Not Supported 00:23:59.728 Variable Capacity Management: Not Supported 00:23:59.728 Delete Endurance Group: Not Supported 00:23:59.728 Delete NVM Set: Not Supported 00:23:59.728 Extended LBA Formats Supported: Not Supported 00:23:59.728 Flexible Data Placement Supported: Not Supported 00:23:59.728 00:23:59.728 Controller Memory Buffer Support 00:23:59.728 ================================ 00:23:59.728 Supported: No 00:23:59.728 00:23:59.728 Persistent Memory Region Support 00:23:59.728 ================================ 00:23:59.728 Supported: No 00:23:59.728 00:23:59.728 Admin Command Set Attributes 00:23:59.728 ============================ 00:23:59.728 Security Send/Receive: Not Supported 00:23:59.728 Format NVM: Not Supported 00:23:59.728 Firmware Activate/Download: Not Supported 00:23:59.728 Namespace Management: Not Supported 00:23:59.728 Device Self-Test: Not Supported 00:23:59.728 Directives: Not Supported 00:23:59.728 NVMe-MI: Not Supported 00:23:59.728 Virtualization Management: Not Supported 00:23:59.728 Doorbell Buffer Config: Not Supported 00:23:59.728 Get LBA Status Capability: Not Supported 00:23:59.728 Command & Feature Lockdown Capability: Not Supported 00:23:59.728 Abort Command Limit: 4 00:23:59.728 Async Event Request Limit: 4 00:23:59.728 Number of Firmware Slots: N/A 00:23:59.728 Firmware Slot 1 Read-Only: N/A 00:23:59.728 Firmware Activation Without Reset: N/A 00:23:59.728 Multiple Update Detection Support: N/A 00:23:59.728 Firmware Update Granularity: No Information Provided 00:23:59.728 Per-Namespace SMART Log: Yes 00:23:59.728 Asymmetric Namespace Access Log Page: Supported 00:23:59.728 ANA Transition Time : 10 sec 00:23:59.728 00:23:59.728 Asymmetric Namespace Access Capabilities 00:23:59.728 ANA Optimized State : Supported 00:23:59.728 ANA Non-Optimized State : Supported 00:23:59.728 ANA Inaccessible State : Supported 00:23:59.728 ANA Persistent Loss State : Supported 00:23:59.728 ANA Change State : Supported 00:23:59.728 ANAGRPID is not changed : No 00:23:59.728 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:59.728 00:23:59.728 ANA Group Identifier Maximum : 128 00:23:59.728 Number of ANA Group Identifiers : 128 00:23:59.728 Max Number of Allowed Namespaces : 1024 00:23:59.728 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:59.728 Command Effects Log Page: Supported 00:23:59.728 Get Log Page Extended Data: Supported 00:23:59.728 Telemetry Log Pages: Not Supported 00:23:59.728 Persistent Event Log Pages: Not Supported 00:23:59.728 Supported Log Pages Log Page: May Support 00:23:59.728 Commands Supported & Effects Log Page: Not Supported 00:23:59.728 Feature Identifiers & Effects Log Page:May Support 00:23:59.728 NVMe-MI Commands & Effects Log Page: May Support 00:23:59.728 Data Area 4 for Telemetry Log: Not Supported 00:23:59.728 Error Log Page Entries Supported: 128 00:23:59.728 Keep Alive: Supported 00:23:59.728 Keep Alive Granularity: 1000 ms 00:23:59.728 00:23:59.728 NVM Command Set Attributes 00:23:59.728 ========================== 00:23:59.728 Submission Queue Entry Size 00:23:59.728 Max: 64 00:23:59.728 Min: 64 00:23:59.728 Completion Queue Entry Size 00:23:59.728 Max: 16 00:23:59.728 Min: 16 00:23:59.728 Number of Namespaces: 1024 00:23:59.728 Compare Command: Not Supported 00:23:59.728 Write Uncorrectable Command: Not Supported 00:23:59.728 Dataset Management Command: Supported 00:23:59.728 Write Zeroes Command: Supported 00:23:59.729 Set Features Save Field: Not Supported 00:23:59.729 Reservations: Not Supported 00:23:59.729 Timestamp: Not Supported 00:23:59.729 Copy: Not Supported 00:23:59.729 Volatile Write Cache: Present 00:23:59.729 Atomic Write Unit (Normal): 1 00:23:59.729 Atomic Write Unit (PFail): 1 00:23:59.729 Atomic Compare & Write Unit: 1 00:23:59.729 Fused Compare & Write: Not Supported 00:23:59.729 Scatter-Gather List 00:23:59.729 SGL Command Set: Supported 00:23:59.729 SGL Keyed: Not Supported 00:23:59.729 SGL Bit Bucket Descriptor: Not Supported 00:23:59.729 SGL Metadata Pointer: Not Supported 00:23:59.729 Oversized SGL: Not Supported 00:23:59.729 SGL Metadata Address: Not Supported 00:23:59.729 SGL Offset: Supported 00:23:59.729 Transport SGL Data Block: Not Supported 00:23:59.729 Replay Protected Memory Block: Not Supported 00:23:59.729 00:23:59.729 Firmware Slot Information 00:23:59.729 ========================= 00:23:59.729 Active slot: 0 00:23:59.729 00:23:59.729 Asymmetric Namespace Access 00:23:59.729 =========================== 00:23:59.729 Change Count : 0 00:23:59.729 Number of ANA Group Descriptors : 1 00:23:59.729 ANA Group Descriptor : 0 00:23:59.729 ANA Group ID : 1 00:23:59.729 Number of NSID Values : 1 00:23:59.729 Change Count : 0 00:23:59.729 ANA State : 1 00:23:59.729 Namespace Identifier : 1 00:23:59.729 00:23:59.729 Commands Supported and Effects 00:23:59.729 ============================== 00:23:59.729 Admin Commands 00:23:59.729 -------------- 00:23:59.729 Get Log Page (02h): Supported 00:23:59.729 Identify (06h): Supported 00:23:59.729 Abort (08h): Supported 00:23:59.729 Set Features (09h): Supported 00:23:59.729 Get Features (0Ah): Supported 00:23:59.729 Asynchronous Event Request (0Ch): Supported 00:23:59.729 Keep Alive (18h): Supported 00:23:59.729 I/O Commands 00:23:59.729 ------------ 00:23:59.729 Flush (00h): Supported 00:23:59.729 Write (01h): Supported LBA-Change 00:23:59.729 Read (02h): Supported 00:23:59.729 Write Zeroes (08h): Supported LBA-Change 00:23:59.729 Dataset Management (09h): Supported 00:23:59.729 00:23:59.729 Error Log 00:23:59.729 ========= 00:23:59.729 Entry: 0 00:23:59.729 Error Count: 0x3 00:23:59.729 Submission Queue Id: 0x0 00:23:59.729 Command Id: 0x5 00:23:59.729 Phase Bit: 0 00:23:59.729 Status Code: 0x2 00:23:59.729 Status Code Type: 0x0 00:23:59.729 Do Not Retry: 1 00:23:59.729 Error 
Location: 0x28 00:23:59.729 LBA: 0x0 00:23:59.729 Namespace: 0x0 00:23:59.729 Vendor Log Page: 0x0 00:23:59.729 ----------- 00:23:59.729 Entry: 1 00:23:59.729 Error Count: 0x2 00:23:59.729 Submission Queue Id: 0x0 00:23:59.729 Command Id: 0x5 00:23:59.729 Phase Bit: 0 00:23:59.729 Status Code: 0x2 00:23:59.729 Status Code Type: 0x0 00:23:59.729 Do Not Retry: 1 00:23:59.729 Error Location: 0x28 00:23:59.729 LBA: 0x0 00:23:59.729 Namespace: 0x0 00:23:59.729 Vendor Log Page: 0x0 00:23:59.729 ----------- 00:23:59.729 Entry: 2 00:23:59.729 Error Count: 0x1 00:23:59.729 Submission Queue Id: 0x0 00:23:59.729 Command Id: 0x4 00:23:59.729 Phase Bit: 0 00:23:59.729 Status Code: 0x2 00:23:59.729 Status Code Type: 0x0 00:23:59.729 Do Not Retry: 1 00:23:59.729 Error Location: 0x28 00:23:59.729 LBA: 0x0 00:23:59.729 Namespace: 0x0 00:23:59.729 Vendor Log Page: 0x0 00:23:59.729 00:23:59.729 Number of Queues 00:23:59.729 ================ 00:23:59.729 Number of I/O Submission Queues: 128 00:23:59.729 Number of I/O Completion Queues: 128 00:23:59.729 00:23:59.729 ZNS Specific Controller Data 00:23:59.729 ============================ 00:23:59.729 Zone Append Size Limit: 0 00:23:59.729 00:23:59.729 00:23:59.729 Active Namespaces 00:23:59.729 ================= 00:23:59.729 get_feature(0x05) failed 00:23:59.729 Namespace ID:1 00:23:59.729 Command Set Identifier: NVM (00h) 00:23:59.729 Deallocate: Supported 00:23:59.729 Deallocated/Unwritten Error: Not Supported 00:23:59.729 Deallocated Read Value: Unknown 00:23:59.729 Deallocate in Write Zeroes: Not Supported 00:23:59.729 Deallocated Guard Field: 0xFFFF 00:23:59.729 Flush: Supported 00:23:59.729 Reservation: Not Supported 00:23:59.729 Namespace Sharing Capabilities: Multiple Controllers 00:23:59.729 Size (in LBAs): 1310720 (5GiB) 00:23:59.729 Capacity (in LBAs): 1310720 (5GiB) 00:23:59.729 Utilization (in LBAs): 1310720 (5GiB) 00:23:59.729 UUID: c48c9fc9-dfd8-4af8-97d3-365c0a526c1c 00:23:59.729 Thin Provisioning: Not Supported 00:23:59.729 Per-NS Atomic Units: Yes 00:23:59.729 Atomic Boundary Size (Normal): 0 00:23:59.729 Atomic Boundary Size (PFail): 0 00:23:59.729 Atomic Boundary Offset: 0 00:23:59.729 NGUID/EUI64 Never Reused: No 00:23:59.729 ANA group ID: 1 00:23:59.729 Namespace Write Protected: No 00:23:59.729 Number of LBA Formats: 1 00:23:59.729 Current LBA Format: LBA Format #00 00:23:59.729 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:59.729 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.989 rmmod nvme_tcp 00:23:59.989 rmmod nvme_fabrics 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:59.989 08:24:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:59.989 08:24:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:00.250 08:24:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:01.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:01.447 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.447 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.447 ************************************ 00:24:01.447 END TEST nvmf_identify_kernel_target 00:24:01.447 ************************************ 00:24:01.447 00:24:01.447 real 0m4.494s 00:24:01.447 user 0m1.432s 00:24:01.447 sys 0m2.395s 00:24:01.447 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.447 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.447 08:24:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:01.447 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.447 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.447 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.707 ************************************ 00:24:01.707 START TEST nvmf_auth_host 00:24:01.707 ************************************ 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:01.707 * Looking for test storage... 
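The teardown traced above happens in two layers: nvmftestfini strips only the iptables rules tagged SPDK_NVMF, unloads nvme-tcp/nvme-fabrics, and deletes the bridge, veth pairs and nvmf_tgt_ns_spdk namespace, while clean_kernel_target removes the configfs tree that the identify_kernel_nvmf test had built for the kernel target. A minimal standalone sketch of that configfs removal, using the NQN from this run (the target of the bare 'echo 0' in the trace is assumed to be the namespace enable attribute):

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    if [[ -e $cfg/subsystems/$nqn ]]; then
        echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumption: disable namespace 1 first
        rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
        rmdir "$cfg/subsystems/$nqn/namespaces/1"
        rmdir "$cfg/ports/1"
        rmdir "$cfg/subsystems/$nqn"
    fi
    modprobe -r nvmet_tcp nvmet                               # drop the kernel target modules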
00:24:01.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:01.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.707 --rc genhtml_branch_coverage=1 00:24:01.707 --rc genhtml_function_coverage=1 00:24:01.707 --rc genhtml_legend=1 00:24:01.707 --rc geninfo_all_blocks=1 00:24:01.707 --rc geninfo_unexecuted_blocks=1 00:24:01.707 00:24:01.707 ' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:01.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.707 --rc genhtml_branch_coverage=1 00:24:01.707 --rc genhtml_function_coverage=1 00:24:01.707 --rc genhtml_legend=1 00:24:01.707 --rc geninfo_all_blocks=1 00:24:01.707 --rc geninfo_unexecuted_blocks=1 00:24:01.707 00:24:01.707 ' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:01.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.707 --rc genhtml_branch_coverage=1 00:24:01.707 --rc genhtml_function_coverage=1 00:24:01.707 --rc genhtml_legend=1 00:24:01.707 --rc geninfo_all_blocks=1 00:24:01.707 --rc geninfo_unexecuted_blocks=1 00:24:01.707 00:24:01.707 ' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:01.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.707 --rc genhtml_branch_coverage=1 00:24:01.707 --rc genhtml_function_coverage=1 00:24:01.707 --rc genhtml_legend=1 00:24:01.707 --rc geninfo_all_blocks=1 00:24:01.707 --rc geninfo_unexecuted_blocks=1 00:24:01.707 00:24:01.707 ' 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.707 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:01.968 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:01.969 Cannot find device "nvmf_init_br" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:01.969 Cannot find device "nvmf_init_br2" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:01.969 Cannot find device "nvmf_tgt_br" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:01.969 Cannot find device "nvmf_tgt_br2" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:01.969 Cannot find device "nvmf_init_br" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:01.969 Cannot find device "nvmf_init_br2" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:01.969 Cannot find device "nvmf_tgt_br" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:01.969 Cannot find device "nvmf_tgt_br2" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:01.969 Cannot find device "nvmf_br" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:01.969 Cannot find device "nvmf_init_if" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:01.969 Cannot find device "nvmf_init_if2" 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:01.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.969 08:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:01.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:01.969 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.229 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:02.229 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.229 08:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
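Everything nvmf_veth_init has done up to this point reads back as a small recipe: one network namespace for the target, four veth pairs whose *_if ends carry the 10.0.0.1 to 10.0.0.4 addresses, and whose *_br ends are enslaved to the nvmf_br bridge, so initiator and target reach each other over a purely local TCP fabric. Condensed from the commands above (the port-4420 ACCEPT rules added right after this sequence are what let the initiator through the host firewall):

    ip netns add nvmf_tgt_ns_spdk
    for pair in "nvmf_init_if nvmf_init_br" "nvmf_init_if2 nvmf_init_br2" \
                "nvmf_tgt_if nvmf_tgt_br"   "nvmf_tgt_if2 nvmf_tgt_br2"; do
        ip link add ${pair% *} type veth peer name ${pair#* }
    done
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk           # target-side ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_end" master nvmf_br                  # tie the four pairs together
    done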
00:24:02.229 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:02.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:24:02.489 00:24:02.489 --- 10.0.0.3 ping statistics --- 00:24:02.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.489 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:02.489 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:02.489 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:24:02.489 00:24:02.489 --- 10.0.0.4 ping statistics --- 00:24:02.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.489 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:24:02.489 00:24:02.489 --- 10.0.0.1 ping statistics --- 00:24:02.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.489 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:02.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:24:02.489 00:24:02.489 --- 10.0.0.2 ping statistics --- 00:24:02.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.489 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=84911 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 84911 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84911 ']' 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
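nvmfappstart then launches the SPDK target inside the target namespace, with -e 0xFFFF enabling every tracepoint group and -L nvme_auth turning on the DH-HMAC-CHAP debug log component, and waits for the application's RPC socket before the test proceeds. A rough stand-in for that start/wait step (the polling loop is an assumption; the real waitforlisten helper is not expanded in this trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done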
00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.489 08:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=429d38904466ba364f92b801a547e594 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3Je 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 429d38904466ba364f92b801a547e594 0 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 429d38904466ba364f92b801a547e594 0 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=429d38904466ba364f92b801a547e594 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3Je 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3Je 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3Je 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.428 08:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:03.428 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ccaaccfc1fc535214833df696221838346fd485e2c2fe2f577e1731c85f12094 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DaN 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccaaccfc1fc535214833df696221838346fd485e2c2fe2f577e1731c85f12094 3 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccaaccfc1fc535214833df696221838346fd485e2c2fe2f577e1731c85f12094 3 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccaaccfc1fc535214833df696221838346fd485e2c2fe2f577e1731c85f12094 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:03.429 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DaN 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DaN 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DaN 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7ef3eb4bba68f68817431ddea13a00e045fc49b6462bc78 00:24:03.688 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WTp 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7ef3eb4bba68f68817431ddea13a00e045fc49b6462bc78 0 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7ef3eb4bba68f68817431ddea13a00e045fc49b6462bc78 0 
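Each gen_dhchap_key call above follows the same pattern: pull len/2 random bytes out of /dev/urandom with xxd, map the hash name to the numeric identifier visible in the digest= assignments (null=0, sha256=1, sha384=2, sha512=3), format the secret as a DHHC-1 string with an elided python snippet, and stash it in a mode-0600 temp file whose path becomes keys[N] or ckeys[N]. The helper below is an illustrative reconstruction, not the SPDK function itself, and it assumes the standard DH-HMAC-CHAP secret representation (base64 of the secret followed by its little-endian CRC32):

    # gen_key <null|sha256|sha384|sha512> <hex-len>   (illustrative name, not the SPDK helper)
    gen_key() {
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)         # e.g. len=32 -> 16 random bytes
        file=$(mktemp -t "spdk.key-$1.XXX")
        # Assumed encoding: DHHC-1:<hash id>:<base64(secret || crc32_le(secret))>:
        python3 -c 'import base64, sys, zlib; s = bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode()))' \
            "$key" "${ids[$1]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    gen_key null 32      # host key, no hash transform    (keys[0] in the trace)
    gen_key sha512 64    # controller key, SHA-512        (ckeys[0] in the trace)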
00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7ef3eb4bba68f68817431ddea13a00e045fc49b6462bc78 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WTp 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WTp 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WTp 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c57334defea04a9dbcc4755b8abaabce39b65cff86ddc8d 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eBo 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c57334defea04a9dbcc4755b8abaabce39b65cff86ddc8d 2 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c57334defea04a9dbcc4755b8abaabce39b65cff86ddc8d 2 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c57334defea04a9dbcc4755b8abaabce39b65cff86ddc8d 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eBo 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eBo 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eBo 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.689 08:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5913c22477f32193cc829c523564ff7a 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jLr 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5913c22477f32193cc829c523564ff7a 1 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5913c22477f32193cc829c523564ff7a 1 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5913c22477f32193cc829c523564ff7a 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:03.689 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jLr 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jLr 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jLr 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a664d6e8ec9d360ffd6042a02d40b621 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zGA 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a664d6e8ec9d360ffd6042a02d40b621 1 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a664d6e8ec9d360ffd6042a02d40b621 1 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a664d6e8ec9d360ffd6042a02d40b621 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zGA 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zGA 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zGA 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0885274cce97330e02540a9f397344293a5bf730ea70c03a 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1AH 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0885274cce97330e02540a9f397344293a5bf730ea70c03a 2 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0885274cce97330e02540a9f397344293a5bf730ea70c03a 2 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0885274cce97330e02540a9f397344293a5bf730ea70c03a 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1AH 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1AH 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1AH 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:03.949 08:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f97786c7e4d7fae7f279e79fa871e088 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.C1k 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f97786c7e4d7fae7f279e79fa871e088 0 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f97786c7e4d7fae7f279e79fa871e088 0 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f97786c7e4d7fae7f279e79fa871e088 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.C1k 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.C1k 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.C1k 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=346d4192e4e615c735b55fd116b764a03b2428014ca848b7ff4d6270746981e9 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vv8 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 346d4192e4e615c735b55fd116b764a03b2428014ca848b7ff4d6270746981e9 3 00:24:03.949 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 346d4192e4e615c735b55fd116b764a03b2428014ca848b7ff4d6270746981e9 3 00:24:03.950 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.950 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:03.950 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=346d4192e4e615c735b55fd116b764a03b2428014ca848b7ff4d6270746981e9 00:24:03.950 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:03.950 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:04.209 08:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vv8 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vv8 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Vv8 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84911 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84911 ']' 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Je 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DaN ]] 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DaN 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.209 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WTp 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eBo ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.eBo 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jLr 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zGA ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zGA 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1AH 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.C1k ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.C1k 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Vv8 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.468 08:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:04.468 08:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:05.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:05.037 Waiting for block devices as requested 00:24:05.037 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:05.296 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:06.235 No valid GPT data, bailing 00:24:06.235 08:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:06.235 No valid GPT data, bailing 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:06.235 No valid GPT data, bailing 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:06.235 No valid GPT data, bailing 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:24:06.235 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -a 10.0.0.1 -t tcp -s 4420 00:24:06.495 00:24:06.495 Discovery Log Number of Records 2, Generation counter 2 00:24:06.495 =====Discovery Log Entry 0====== 00:24:06.495 trtype: tcp 00:24:06.495 adrfam: ipv4 00:24:06.495 subtype: current discovery subsystem 00:24:06.495 treq: not specified, sq flow control disable supported 00:24:06.495 portid: 1 00:24:06.495 trsvcid: 4420 00:24:06.495 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:06.495 traddr: 10.0.0.1 00:24:06.495 eflags: none 00:24:06.495 sectype: none 00:24:06.495 =====Discovery Log Entry 1====== 00:24:06.495 trtype: tcp 00:24:06.495 adrfam: ipv4 00:24:06.495 subtype: nvme subsystem 00:24:06.495 treq: not specified, sq flow control disable supported 00:24:06.495 portid: 1 00:24:06.495 trsvcid: 4420 00:24:06.495 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:06.495 traddr: 10.0.0.1 00:24:06.495 eflags: none 00:24:06.495 sectype: none 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.495 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.496 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.756 nvme0n1 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.756 nvme0n1 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.756 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.016 
08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.016 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.017 08:24:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.017 nvme0n1 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:07.017 08:24:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.017 08:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.277 nvme0n1 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.277 08:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.277 nvme0n1 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.277 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.537 
08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.537 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:07.538 nvme0n1 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.538 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:07.798 08:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.798 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.799 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 nvme0n1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.059 08:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.059 08:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.059 08:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 nvme0n1 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 nvme0n1 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.319 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.579 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.580 nvme0n1 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.580 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.840 nvme0n1 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
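The block that starts below repeats the same procedure for the next DH group, ffdhe4096, and the one after it for ffdhe6144: the host/auth.sh@101-104 and @64-65 references in the trace outline a nested loop over DH groups and key ids in which each key is programmed on the target, authenticated from the host, verified, and torn down again. Roughly (a sketch assuming keys/ckeys hold the DHHC-1 secrets printed in the trace, with sha256 as the digest throughout this span):

    digest=sha256
    for dhgroup in "${dhgroups[@]}"; do          # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do           # key ids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach
            # exactly one controller, named nvme0, must have authenticated
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0           # clean up for the next key
        done
    done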
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.840 08:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.409 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.410 08:24:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.410 nvme0n1 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.410 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.670 08:24:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.670 nvme0n1 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.670 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.930 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 nvme0n1 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.931 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
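The get_main_ns_ip helper that shows up before every attach (nvmf/common.sh@769-783 in the trace) only decides which address the host should dial for the transport under test; for tcp that is the initiator-side address, 10.0.0.1 in this run. A rough reconstruction from those trace lines (the real helper may differ; TEST_TRANSPORT and the NVMF_* variables are assumed from the surrounding test configuration):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # name of the variable holding the address
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
        [[ -n ${!ip} ]] && echo "${!ip}"       # dereference it: 10.0.0.1 here
    }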
nvmet_auth_set_key sha256 ffdhe4096 3 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.191 08:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.191 nvme0n1 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.191 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.451 08:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.451 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.452 nvme0n1 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.452 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
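Note the difference between key id 4 and the others in the attach calls above: keys 0-3 come with a companion controller key, so the host passes both --dhchap-key and --dhchap-ctrlr-key (asking the controller to authenticate back to the host as well), while no ckey was generated for key 4 and that option is simply omitted. The two forms, as they appear in the trace:

    # key ids 0-3: host key plus controller key
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # key id 4: no controller key, host-only authentication
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4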
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.711 08:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.091 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:12.091 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:12.091 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:12.091 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.092 08:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.092 nvme0n1 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.092 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.352 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.612 nvme0n1 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.612 08:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.612 08:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.612 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.872 nvme0n1 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.872 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.149 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.150 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.150 08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.150 
08:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.444 nvme0n1 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:13.444 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.445 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.715 nvme0n1 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.715 08:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:13.715 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.716 08:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 nvme0n1 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.292 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.293 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.872 nvme0n1 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.872 
08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.872 08:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.441 nvme0n1 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.441 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.442 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.011 nvme0n1 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.011 08:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.011 08:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.011 08:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 nvme0n1 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.580 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.840 nvme0n1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 nvme0n1 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.840 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:17.099 
08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 nvme0n1 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.099 08:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.099 
08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.099 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.357 nvme0n1 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.357 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.358 nvme0n1 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.358 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 nvme0n1 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 
08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.618 08:24:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.618 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.878 nvme0n1 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:17.878 08:24:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.878 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.138 nvme0n1 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.138 08:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.138 08:24:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.138 nvme0n1 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.138 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.398 
08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
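What the trace above keeps repeating is one host-side authentication probe per (digest, DH group, key ID) combination: pin bdev_nvme to a single digest and DH group, attach to the target with the DH-HMAC-CHAP secrets for that key ID, confirm the controller (nvme0) actually appeared, then detach before the next combination. Below is a minimal bash sketch of that cycle, not the literal host/auth.sh source; it assumes SPDK's scripts/rpc.py stands in for the rpc_cmd wrapper seen in the trace and that the key0..key4 / ckey0..ckey3 secrets were registered earlier in the test (outside this excerpt).

#!/usr/bin/env bash
# Hedged sketch of the per-key cycle the surrounding trace repeats.
# Assumptions (not shown in this excerpt): an SPDK nvmf target listening on
# 10.0.0.1:4420 as nqn.2024-02.io.spdk:cnode0, and DHHC-1 secrets already
# registered under the names keyN / ckeyN.
set -e
rpc=./scripts/rpc.py   # rpc_cmd in the trace is the test suite's wrapper around this client

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Pin the host to a single digest and DH group so exactly this combination is negotiated.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # In this run key 4 has no controller (bidirectional) secret, so --dhchap-ctrlr-key is
    # only passed when a ckey exists -- same effect as the ${ckeys[keyid]:+...} idiom in the trace.
    local ctrlr_key=()
    [[ $keyid -lt 4 ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ctrlr_key[@]}"

    # The attach only succeeds if DH-HMAC-CHAP authentication passed; verify, then clean up.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0
}

# Example: the iteration the surrounding trace is working through.
connect_authenticate sha384 ffdhe3072 4

In the run captured here this probe runs once per key ID for each DH group (ffdhe2048, ffdhe3072, ffdhe4096) with the sha384 digest; an attach that fails authentication never produces the nvme0 controller, which is what the bdev_nvme_get_controllers check catches before the detach.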
00:24:18.398 nvme0n1 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.398 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.399 08:24:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.399 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 nvme0n1 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.658 08:24:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.658 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.659 08:24:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.659 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.918 nvme0n1 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.918 08:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 nvme0n1 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.178 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.438 nvme0n1 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.438 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.439 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.439 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.439 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.439 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.439 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.439 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.698 nvme0n1 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.698 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.698 08:24:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.699 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.958 nvme0n1 00:24:19.958 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.958 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.958 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.958 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.958 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.958 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.217 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.218 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:20.218 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.218 08:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.218 08:24:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.218 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.477 nvme0n1 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.477 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.478 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.478 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.478 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.478 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.478 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.737 nvme0n1 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.737 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.997 08:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.256 nvme0n1 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:21.256 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.257 08:24:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.257 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.516 nvme0n1 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:21.516 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.517 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.085 nvme0n1 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.085 08:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.085 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.086 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.654 nvme0n1 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.654 08:24:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.654 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.655 08:24:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.655 08:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.224 nvme0n1 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.224 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.224 
08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 nvme0n1 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.793 08:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.361 nvme0n1 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:24.361 08:24:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.361 08:24:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.361 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 nvme0n1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:24.620 08:24:29 
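The repeated nvmf/common.sh@769-783 lines are the get_main_ns_ip helper choosing the address the initiator should dial for the transport under test. Reconstructed from the xtrace output (the transport variable name and the failure handling are guesses; the real helper may differ in detail), it behaves roughly like:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                   # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                            # indirect expansion
        echo "${!ip}"                                          # -> 10.0.0.1 here
    }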
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 nvme0n1 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.621 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.880 nvme0n1 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:24.880 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.881 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 nvme0n1 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
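nvmet_auth_set_key (host/auth.sh@42-51 in the trace) is the target-side half: before each connect it programs the soft target with the digest, DH group and secret(s) for the key id under test. xtrace does not show where the echoes are redirected; the paths below are an assumption (the usual nvmet configfs host attributes), so treat this as a sketch rather than the actual helper:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}   # arrays defined earlier in host/auth.sh
        # Assumed destination; the real redirection targets are not visible in the trace
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host_dir/dhchap_hash"      # e.g. hmac(sha512)
        echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe2048
        echo "$key"          > "$host_dir/dhchap_key"
        # The controller key is optional; key id 4 has none in this run
        [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }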
host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.141 08:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 nvme0n1 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:25.141 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.401 nvme0n1 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.401 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.402 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.661 nvme0n1 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:25.661 
08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.661 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.662 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.921 nvme0n1 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:25.921 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.922 
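All secrets in this run use the DH-HMAC-CHAP representation DHHC-1:<xx>:<base64>:, where <xx> indicates how the secret is interpreted (00 for a plain secret, 01/02/03 for SHA-256/384/512-sized secrets) and the base64 blob carries the secret followed by a 4-byte checksum trailer. The lengths seen in this log are consistent with that reading; for example (illustrative only, and the trailer interpretation is an assumption):

    # 48 base64 characters decode to 36 bytes: a 32-byte secret plus a 4-byte trailer
    echo -n 'NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu' | base64 -d | wc -c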
08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.922 nvme0n1 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.922 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.181 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.182 08:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.182 nvme0n1 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.182 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 nvme0n1 00:24:26.441 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.441 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.441 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.441 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.441 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.441 
08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.442 08:24:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.442 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.701 nvme0n1 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:26.701 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:26.702 08:24:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.702 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.962 nvme0n1 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.962 08:24:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.962 08:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.222 nvme0n1 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.222 
08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.222 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:27.482 nvme0n1 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:27.482 08:24:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.482 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.742 nvme0n1 00:24:27.742 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.742 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.742 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.742 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.742 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.742 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.001 08:24:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.001 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.002 08:24:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.002 08:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.261 nvme0n1 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.261 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.262 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.521 nvme0n1 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.521 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.781 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.040 nvme0n1 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:29.040 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.041 08:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.301 nvme0n1 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDI5ZDM4OTA0NDY2YmEzNjRmOTJiODAxYTU0N2U1OTT6CLAu: 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: ]] 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2NhYWNjZmMxZmM1MzUyMTQ4MzNkZjY5NjIyMTgzODM0NmZkNDg1ZTJjMmZlMmY1NzdlMTczMWM4NWYxMjA5NIfVtOo=: 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.301 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.561 08:24:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.561 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.819 nvme0n1 00:24:29.819 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.819 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.819 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.820 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.820 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.820 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.079 08:24:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.079 08:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.648 nvme0n1 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.648 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.217 nvme0n1 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.217 08:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDg4NTI3NGNjZTk3MzMwZTAyNTQwYTlmMzk3MzQ0MjkzYTViZjczMGVhNzBjMDNhTH8vFQ==: 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: ]] 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjk3Nzg2YzdlNGQ3ZmFlN2YyNzllNzlmYTg3MWUwODiiV98i: 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.217 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.785 nvme0n1 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.785 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzQ2ZDQxOTJlNGU2MTVjNzM1YjU1ZmQxMTZiNzY0YTAzYjI0MjgwMTRjYTg0OGI3ZmY0ZDYyNzA3NDY5ODFlOQQH+r4=: 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.786 08:24:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.786 08:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.355 nvme0n1 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.355 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.355 request: 00:24:32.355 { 00:24:32.355 "name": "nvme0", 00:24:32.355 "trtype": "tcp", 00:24:32.355 "traddr": "10.0.0.1", 00:24:32.355 "adrfam": "ipv4", 00:24:32.355 "trsvcid": "4420", 00:24:32.355 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:32.355 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:32.355 "prchk_reftag": false, 00:24:32.355 "prchk_guard": false, 00:24:32.356 "hdgst": false, 00:24:32.356 "ddgst": false, 00:24:32.356 "allow_unrecognized_csi": false, 00:24:32.356 "method": "bdev_nvme_attach_controller", 00:24:32.356 "req_id": 1 00:24:32.356 } 00:24:32.356 Got JSON-RPC error response 00:24:32.356 response: 00:24:32.356 { 00:24:32.356 "code": -5, 00:24:32.356 "message": "Input/output error" 00:24:32.356 } 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.356 request: 00:24:32.356 { 00:24:32.356 "name": "nvme0", 00:24:32.356 "trtype": "tcp", 00:24:32.356 "traddr": "10.0.0.1", 00:24:32.356 "adrfam": "ipv4", 00:24:32.356 "trsvcid": "4420", 00:24:32.356 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:32.356 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:32.356 "prchk_reftag": false, 00:24:32.356 "prchk_guard": false, 00:24:32.356 "hdgst": false, 00:24:32.356 "ddgst": false, 00:24:32.356 "dhchap_key": "key2", 00:24:32.356 "allow_unrecognized_csi": false, 00:24:32.356 "method": "bdev_nvme_attach_controller", 00:24:32.356 "req_id": 1 00:24:32.356 } 00:24:32.356 Got JSON-RPC error response 00:24:32.356 response: 00:24:32.356 { 00:24:32.356 "code": -5, 00:24:32.356 "message": "Input/output error" 00:24:32.356 } 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:32.356 08:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.356 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.627 request: 00:24:32.627 { 00:24:32.627 "name": "nvme0", 00:24:32.627 "trtype": "tcp", 00:24:32.627 "traddr": "10.0.0.1", 00:24:32.627 "adrfam": "ipv4", 00:24:32.627 "trsvcid": "4420", 
00:24:32.627 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:32.627 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:32.627 "prchk_reftag": false, 00:24:32.627 "prchk_guard": false, 00:24:32.627 "hdgst": false, 00:24:32.627 "ddgst": false, 00:24:32.627 "dhchap_key": "key1", 00:24:32.627 "dhchap_ctrlr_key": "ckey2", 00:24:32.627 "allow_unrecognized_csi": false, 00:24:32.627 "method": "bdev_nvme_attach_controller", 00:24:32.627 "req_id": 1 00:24:32.627 } 00:24:32.627 Got JSON-RPC error response 00:24:32.627 response: 00:24:32.627 { 00:24:32.627 "code": -5, 00:24:32.627 "message": "Input/output error" 00:24:32.627 } 00:24:32.627 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:32.627 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:32.627 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:32.627 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.628 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.629 nvme0n1 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:32.629 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:32.630 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.631 request: 00:24:32.631 { 00:24:32.631 "name": "nvme0", 00:24:32.631 "dhchap_key": "key1", 00:24:32.631 "dhchap_ctrlr_key": "ckey2", 00:24:32.631 "method": "bdev_nvme_set_keys", 00:24:32.631 "req_id": 1 00:24:32.631 } 00:24:32.631 Got JSON-RPC error response 00:24:32.631 response: 00:24:32.631 
{ 00:24:32.631 "code": -13, 00:24:32.631 "message": "Permission denied" 00:24:32.631 } 00:24:32.631 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.632 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.895 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.895 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:32.895 08:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdlZjNlYjRiYmE2OGY2ODgxNzQzMWRkZWExM2EwMGUwNDVmYzQ5YjY0NjJiYzc4estPvg==: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGM1NzMzNGRlZmVhMDRhOWRiY2M0NzU1YjhhYmFhYmNlMzliNjVjZmY4NmRkYzhkNy0oLg==: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.833 nvme0n1 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTkxM2MyMjQ3N2YzMjE5M2NjODI5YzUyMzU2NGZmN2HbBQ+v: 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: ]] 00:24:33.833 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTY2NGQ2ZThlYzlkMzYwZmZkNjA0MmEwMmQ0MGI2MjHWEQHZ: 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.834 request: 00:24:33.834 { 00:24:33.834 "name": "nvme0", 00:24:33.834 "dhchap_key": "key2", 00:24:33.834 "dhchap_ctrlr_key": "ckey1", 00:24:33.834 "method": "bdev_nvme_set_keys", 00:24:33.834 "req_id": 1 00:24:33.834 } 00:24:33.834 Got JSON-RPC error response 00:24:33.834 response: 00:24:33.834 { 00:24:33.834 "code": -13, 00:24:33.834 "message": "Permission denied" 00:24:33.834 } 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.834 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:34.093 08:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.031 08:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.031 rmmod nvme_tcp 00:24:35.031 rmmod nvme_fabrics 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 84911 ']' 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 84911 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 84911 ']' 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 84911 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.031 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84911 00:24:35.290 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.290 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.290 killing process with pid 84911 00:24:35.290 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84911' 00:24:35.290 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 84911 00:24:35.290 08:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 84911 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:36.228 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:36.488 08:24:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.488 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:36.759 08:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:37.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:37.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:24:37.745 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:37.745 08:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3Je /tmp/spdk.key-null.WTp /tmp/spdk.key-sha256.jLr /tmp/spdk.key-sha384.1AH /tmp/spdk.key-sha512.Vv8 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:37.745 08:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:38.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:38.314 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.314 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:38.573 00:24:38.574 real 0m36.923s 00:24:38.574 user 0m33.490s 00:24:38.574 sys 0m6.016s 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.574 ************************************ 00:24:38.574 END TEST nvmf_auth_host 00:24:38.574 ************************************ 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.574 ************************************ 00:24:38.574 START TEST nvmf_digest 00:24:38.574 ************************************ 00:24:38.574 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:38.834 * Looking for test storage... 
00:24:38.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.834 --rc genhtml_branch_coverage=1 00:24:38.834 --rc genhtml_function_coverage=1 00:24:38.834 --rc genhtml_legend=1 00:24:38.834 --rc geninfo_all_blocks=1 00:24:38.834 --rc geninfo_unexecuted_blocks=1 00:24:38.834 00:24:38.834 ' 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.834 --rc genhtml_branch_coverage=1 00:24:38.834 --rc genhtml_function_coverage=1 00:24:38.834 --rc genhtml_legend=1 00:24:38.834 --rc geninfo_all_blocks=1 00:24:38.834 --rc geninfo_unexecuted_blocks=1 00:24:38.834 00:24:38.834 ' 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.834 --rc genhtml_branch_coverage=1 00:24:38.834 --rc genhtml_function_coverage=1 00:24:38.834 --rc genhtml_legend=1 00:24:38.834 --rc geninfo_all_blocks=1 00:24:38.834 --rc geninfo_unexecuted_blocks=1 00:24:38.834 00:24:38.834 ' 00:24:38.834 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.835 --rc genhtml_branch_coverage=1 00:24:38.835 --rc genhtml_function_coverage=1 00:24:38.835 --rc genhtml_legend=1 00:24:38.835 --rc geninfo_all_blocks=1 00:24:38.835 --rc geninfo_unexecuted_blocks=1 00:24:38.835 00:24:38.835 ' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.835 08:24:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:38.835 Cannot find device "nvmf_init_br" 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:38.835 Cannot find device "nvmf_init_br2" 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:38.835 Cannot find device "nvmf_tgt_br" 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:38.835 Cannot find device "nvmf_tgt_br2" 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:38.835 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:39.096 Cannot find device "nvmf_init_br" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:39.096 Cannot find device "nvmf_init_br2" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:39.096 Cannot find device "nvmf_tgt_br" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:39.096 Cannot find device "nvmf_tgt_br2" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:39.096 Cannot find device "nvmf_br" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:39.096 Cannot find device "nvmf_init_if" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:39.096 Cannot find device "nvmf_init_if2" 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:39.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.096 08:24:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.096 08:24:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:39.096 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:39.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:39.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:24:39.356 00:24:39.356 --- 10.0.0.3 ping statistics --- 00:24:39.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.356 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:39.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:39.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.121 ms 00:24:39.356 00:24:39.356 --- 10.0.0.4 ping statistics --- 00:24:39.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.356 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:24:39.356 00:24:39.356 --- 10.0.0.1 ping statistics --- 00:24:39.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.356 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:39.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:24:39.356 00:24:39.356 --- 10.0.0.2 ping statistics --- 00:24:39.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.356 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:39.356 ************************************ 00:24:39.356 START TEST nvmf_digest_clean 00:24:39.356 ************************************ 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
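For orientation, the nvmf_veth_init sequence traced above reduces to the topology below: two initiator veth ends stay on the host (10.0.0.1 and 10.0.0.2), two target ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and all peer ends are enslaved to the nvmf_br bridge. This is a condensed sketch of the commands shown in the log, omitting the intermediate 'ip link set ... up' steps; note that every iptables rule carries an SPDK_NVMF comment so the iptr cleanup seen earlier in this log can remove them in one save/grep/restore pass.

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br peer ends get bridged
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # open TCP/4420 on the initiator interfaces; the comment tag lets
    # 'iptables-save | grep -v SPDK_NVMF | iptables-restore' sweep the rules later
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'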
00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=86562 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 86562 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86562 ']' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:39.356 08:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:39.616 [2024-11-17 08:24:44.424209] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:39.616 [2024-11-17 08:24:44.424346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.616 [2024-11-17 08:24:44.610884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.876 [2024-11-17 08:24:44.743524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.876 [2024-11-17 08:24:44.743601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.876 [2024-11-17 08:24:44.743617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.876 [2024-11-17 08:24:44.743656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.876 [2024-11-17 08:24:44.743670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
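The target for this suite is the SPDK nvmf_tgt just launched above with --wait-for-rpc inside the nvmf_tgt_ns_spdk namespace; the rpc_cmd step that follows configures it, and the null0 bdev, the TCP transport init and the listener on 10.0.0.3 port 4420 show up in the notices below. The trace does not show the individual RPC calls, so the sketch here is only a hedged reconstruction over the default /var/tmp/spdk.sock socket; the bdev size, block size and the allow-any-host flag are assumptions rather than values taken from this log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 100 4096                        # assumed size (MB) and block size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a    # allow-any-host assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420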
00:24:39.876 [2024-11-17 08:24:44.745053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.445 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:40.704 [2024-11-17 08:24:45.523259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:40.704 null0 00:24:40.704 [2024-11-17 08:24:45.672236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.704 [2024-11-17 08:24:45.696380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86594 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86594 /var/tmp/bperf.sock 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86594 ']' 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:40.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.704 08:24:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:40.964 [2024-11-17 08:24:45.805605] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:40.964 [2024-11-17 08:24:45.805708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86594 ] 00:24:41.223 [2024-11-17 08:24:45.982995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.223 [2024-11-17 08:24:46.086814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.792 08:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.792 08:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:41.792 08:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:41.792 08:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:41.792 08:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:42.051 [2024-11-17 08:24:47.011586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:42.310 08:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.310 08:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:42.570 nvme0n1 00:24:42.570 08:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:42.570 08:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:42.570 Running I/O for 2 seconds... 
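On the initiator side the digest test never touches the kernel nvme driver; it drives a dedicated bdevperf instance over its own RPC socket. A minimal sketch of the flow traced above for this first pass (randread, 4 KiB I/O, queue depth 128, data digest enabled via --ddgst):

    spdk=/home/vagrant/spdk_repo/spdk
    # start bdevperf idle on a private socket, waiting for RPCs
    $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # complete framework init, then attach the TCP controller with data digest on
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload against the resulting nvme0n1 bdev
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests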
00:24:44.518 17145.00 IOPS, 66.97 MiB/s [2024-11-17T08:24:49.789Z] 17272.00 IOPS, 67.47 MiB/s 00:24:44.777 Latency(us) 00:24:44.777 [2024-11-17T08:24:49.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.777 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:44.777 nvme0n1 : 2.01 17266.42 67.45 0.00 0.00 7408.80 7158.95 21055.74 00:24:44.777 [2024-11-17T08:24:49.789Z] =================================================================================================================== 00:24:44.777 [2024-11-17T08:24:49.789Z] Total : 17266.42 67.45 0.00 0.00 7408.80 7158.95 21055.74 00:24:44.777 { 00:24:44.777 "results": [ 00:24:44.777 { 00:24:44.777 "job": "nvme0n1", 00:24:44.777 "core_mask": "0x2", 00:24:44.777 "workload": "randread", 00:24:44.777 "status": "finished", 00:24:44.777 "queue_depth": 128, 00:24:44.777 "io_size": 4096, 00:24:44.777 "runtime": 2.008059, 00:24:44.777 "iops": 17266.424940701443, 00:24:44.777 "mibps": 67.44697242461501, 00:24:44.777 "io_failed": 0, 00:24:44.777 "io_timeout": 0, 00:24:44.777 "avg_latency_us": 7408.800541158636, 00:24:44.777 "min_latency_us": 7158.952610441767, 00:24:44.777 "max_latency_us": 21055.74297188755 00:24:44.777 } 00:24:44.777 ], 00:24:44.777 "core_count": 1 00:24:44.777 } 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:44.777 | select(.opcode=="crc32c") 00:24:44.777 | "\(.module_name) \(.executed)"' 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86594 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86594 ']' 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86594 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.777 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86594 00:24:45.037 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:45.037 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
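The pass/fail decision for each run is not taken from the I/O numbers but from the accel framework statistics: with scan_dsa=false, every crc32c digest operation must have been executed by the software module. A sketch of that check as traced above, against the same bperf RPC socket:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # some crc32c work must have happened, and all of it in the software engine
    (( acc_executed > 0 )) && [[ $acc_module == software ]]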
00:24:45.037 killing process with pid 86594 00:24:45.037 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86594' 00:24:45.037 Received shutdown signal, test time was about 2.000000 seconds 00:24:45.037 00:24:45.037 Latency(us) 00:24:45.037 [2024-11-17T08:24:50.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.037 [2024-11-17T08:24:50.049Z] =================================================================================================================== 00:24:45.037 [2024-11-17T08:24:50.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.037 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86594 00:24:45.037 08:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86594 00:24:45.975 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:45.975 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86661 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86661 /var/tmp/bperf.sock 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86661 ']' 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.976 08:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:45.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:45.976 Zero copy mechanism will not be used. 00:24:45.976 [2024-11-17 08:24:50.837613] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:45.976 [2024-11-17 08:24:50.837727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86661 ] 00:24:46.234 [2024-11-17 08:24:51.015569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.234 [2024-11-17 08:24:51.117270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.803 08:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.803 08:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:46.803 08:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:46.803 08:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:46.803 08:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.062 [2024-11-17 08:24:52.044077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:47.321 08:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.321 08:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.580 nvme0n1 00:24:47.580 08:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:47.580 08:24:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:47.580 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:47.580 Zero copy mechanism will not be used. 00:24:47.580 Running I/O for 2 seconds... 
00:24:49.528 6128.00 IOPS, 766.00 MiB/s [2024-11-17T08:24:54.799Z] 6136.00 IOPS, 767.00 MiB/s 00:24:49.787 Latency(us) 00:24:49.787 [2024-11-17T08:24:54.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.787 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:49.787 nvme0n1 : 2.00 6138.56 767.32 0.00 0.00 2603.83 2434.57 4079.55 00:24:49.787 [2024-11-17T08:24:54.799Z] =================================================================================================================== 00:24:49.787 [2024-11-17T08:24:54.799Z] Total : 6138.56 767.32 0.00 0.00 2603.83 2434.57 4079.55 00:24:49.787 { 00:24:49.787 "results": [ 00:24:49.787 { 00:24:49.787 "job": "nvme0n1", 00:24:49.787 "core_mask": "0x2", 00:24:49.787 "workload": "randread", 00:24:49.787 "status": "finished", 00:24:49.787 "queue_depth": 16, 00:24:49.787 "io_size": 131072, 00:24:49.787 "runtime": 2.001774, 00:24:49.787 "iops": 6138.555101624859, 00:24:49.787 "mibps": 767.3193877031074, 00:24:49.787 "io_failed": 0, 00:24:49.787 "io_timeout": 0, 00:24:49.787 "avg_latency_us": 2603.8265060240965, 00:24:49.787 "min_latency_us": 2434.570281124498, 00:24:49.787 "max_latency_us": 4079.5502008032126 00:24:49.787 } 00:24:49.787 ], 00:24:49.787 "core_count": 1 00:24:49.787 } 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:49.787 | select(.opcode=="crc32c") 00:24:49.787 | "\(.module_name) \(.executed)"' 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86661 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86661 ']' 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86661 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.787 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86661 00:24:50.047 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:50.047 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
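As a quick sanity check on the 128 KiB randread pass reported above, the MiB/s column is simply IOPS multiplied by the I/O size:

    # 6138.56 IOPS * 131072 bytes per I/O / 1048576 bytes per MiB = 767.32 MiB/s
    echo '6138.56 * 131072 / 1048576' | bc -l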
00:24:50.047 killing process with pid 86661 00:24:50.047 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86661' 00:24:50.047 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.047 00:24:50.047 Latency(us) 00:24:50.047 [2024-11-17T08:24:55.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.047 [2024-11-17T08:24:55.059Z] =================================================================================================================== 00:24:50.047 [2024-11-17T08:24:55.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.047 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86661 00:24:50.047 08:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86661 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86730 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86730 /var/tmp/bperf.sock 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86730 ']' 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.984 08:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.984 [2024-11-17 08:24:55.971558] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:50.984 [2024-11-17 08:24:55.971705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86730 ] 00:24:51.244 [2024-11-17 08:24:56.157271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.502 [2024-11-17 08:24:56.262247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.071 08:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.071 08:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:52.071 08:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:52.071 08:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:52.071 08:24:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:52.330 [2024-11-17 08:24:57.149231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:52.330 08:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:52.330 08:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:52.592 nvme0n1 00:24:52.592 08:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:52.592 08:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:52.852 Running I/O for 2 seconds... 
00:24:54.725 18416.00 IOPS, 71.94 MiB/s [2024-11-17T08:24:59.737Z] 18606.00 IOPS, 72.68 MiB/s 00:24:54.725 Latency(us) 00:24:54.725 [2024-11-17T08:24:59.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.725 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.725 nvme0n1 : 2.01 18601.27 72.66 0.00 0.00 6876.24 6579.92 14002.07 00:24:54.725 [2024-11-17T08:24:59.737Z] =================================================================================================================== 00:24:54.725 [2024-11-17T08:24:59.737Z] Total : 18601.27 72.66 0.00 0.00 6876.24 6579.92 14002.07 00:24:54.725 { 00:24:54.725 "results": [ 00:24:54.725 { 00:24:54.725 "job": "nvme0n1", 00:24:54.725 "core_mask": "0x2", 00:24:54.725 "workload": "randwrite", 00:24:54.725 "status": "finished", 00:24:54.725 "queue_depth": 128, 00:24:54.725 "io_size": 4096, 00:24:54.725 "runtime": 2.00739, 00:24:54.725 "iops": 18601.268313581317, 00:24:54.725 "mibps": 72.66120434992702, 00:24:54.725 "io_failed": 0, 00:24:54.725 "io_timeout": 0, 00:24:54.725 "avg_latency_us": 6876.241973657888, 00:24:54.725 "min_latency_us": 6579.919678714859, 00:24:54.725 "max_latency_us": 14002.06907630522 00:24:54.725 } 00:24:54.725 ], 00:24:54.725 "core_count": 1 00:24:54.725 } 00:24:54.725 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:54.725 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:54.725 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:54.725 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:54.725 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:54.725 | select(.opcode=="crc32c") 00:24:54.726 | "\(.module_name) \(.executed)"' 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86730 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86730 ']' 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86730 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.985 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86730 00:24:54.985 killing process with pid 86730 00:24:54.985 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.985 00:24:54.985 Latency(us) 00:24:54.985 [2024-11-17T08:24:59.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
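Once the two-second run finishes, the test checks where the CRC-32C digest work actually executed; with scan_dsa=false the expected accel module is software. A minimal sketch of that check, reusing the accel_get_stats call and jq filter shown above (socket path as in this environment):

  # keep only the crc32c entry from the accel statistics
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # output is "<module_name> <executed>", e.g. "software 12345"; the test then asserts
  # executed > 0 and module_name == software before tearing bdevperf down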
00:24:54.985 [2024-11-17T08:24:59.998Z] =================================================================================================================== 00:24:54.986 [2024-11-17T08:24:59.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.986 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.986 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:54.986 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86730' 00:24:54.986 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86730 00:24:54.986 08:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86730 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86801 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86801 /var/tmp/bperf.sock 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86801 ']' 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:55.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.923 08:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.182 [2024-11-17 08:25:00.954917] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:56.182 [2024-11-17 08:25:00.955037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86801 ] 00:24:56.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:56.182 Zero copy mechanism will not be used. 00:24:56.182 [2024-11-17 08:25:01.131927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.442 [2024-11-17 08:25:01.232274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.014 08:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.014 08:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:57.014 08:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:57.014 08:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:57.014 08:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:57.278 [2024-11-17 08:25:02.147111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:57.278 08:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:57.278 08:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:57.539 nvme0n1 00:24:57.539 08:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:57.540 08:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:57.799 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:57.799 Zero copy mechanism will not be used. 00:24:57.799 Running I/O for 2 seconds... 
00:24:59.675 6353.00 IOPS, 794.12 MiB/s [2024-11-17T08:25:04.687Z] 6393.00 IOPS, 799.12 MiB/s 00:24:59.675 Latency(us) 00:24:59.675 [2024-11-17T08:25:04.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.675 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:59.675 nvme0n1 : 2.00 6390.07 798.76 0.00 0.00 2499.37 1895.02 7948.54 00:24:59.675 [2024-11-17T08:25:04.687Z] =================================================================================================================== 00:24:59.675 [2024-11-17T08:25:04.687Z] Total : 6390.07 798.76 0.00 0.00 2499.37 1895.02 7948.54 00:24:59.675 { 00:24:59.675 "results": [ 00:24:59.675 { 00:24:59.675 "job": "nvme0n1", 00:24:59.675 "core_mask": "0x2", 00:24:59.675 "workload": "randwrite", 00:24:59.675 "status": "finished", 00:24:59.675 "queue_depth": 16, 00:24:59.675 "io_size": 131072, 00:24:59.675 "runtime": 2.003735, 00:24:59.675 "iops": 6390.066550716537, 00:24:59.675 "mibps": 798.7583188395671, 00:24:59.675 "io_failed": 0, 00:24:59.675 "io_timeout": 0, 00:24:59.675 "avg_latency_us": 2499.3745759670983, 00:24:59.675 "min_latency_us": 1895.0168674698796, 00:24:59.675 "max_latency_us": 7948.54297188755 00:24:59.675 } 00:24:59.675 ], 00:24:59.675 "core_count": 1 00:24:59.675 } 00:24:59.675 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:59.675 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:59.675 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:59.675 | select(.opcode=="crc32c") 00:24:59.675 | "\(.module_name) \(.executed)"' 00:24:59.675 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:59.675 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86801 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86801 ']' 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86801 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86801 00:24:59.936 killing process with pid 86801 00:24:59.936 Received shutdown signal, test time was about 2.000000 seconds 00:24:59.936 00:24:59.936 Latency(us) 00:24:59.936 [2024-11-17T08:25:04.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:59.936 [2024-11-17T08:25:04.948Z] =================================================================================================================== 00:24:59.936 [2024-11-17T08:25:04.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86801' 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86801 00:24:59.936 08:25:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86801 00:25:01.316 08:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86562 00:25:01.316 08:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86562 ']' 00:25:01.316 08:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86562 00:25:01.316 08:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:01.316 08:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.316 08:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86562 00:25:01.316 killing process with pid 86562 00:25:01.316 08:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.316 08:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.316 08:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86562' 00:25:01.316 08:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86562 00:25:01.316 08:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86562 00:25:02.255 00:25:02.255 real 0m22.903s 00:25:02.255 user 0m40.690s 00:25:02.255 sys 0m6.421s 00:25:02.255 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.255 ************************************ 00:25:02.255 END TEST nvmf_digest_clean 00:25:02.255 ************************************ 00:25:02.255 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:02.515 ************************************ 00:25:02.515 START TEST nvmf_digest_error 00:25:02.515 ************************************ 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:02.515 08:25:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=86903 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 86903 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86903 ']' 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.515 08:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.515 [2024-11-17 08:25:07.424378] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:02.515 [2024-11-17 08:25:07.424538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.775 [2024-11-17 08:25:07.613536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.775 [2024-11-17 08:25:07.750254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.775 [2024-11-17 08:25:07.750313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.775 [2024-11-17 08:25:07.750330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.775 [2024-11-17 08:25:07.750352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.775 [2024-11-17 08:25:07.750366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
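Like bdevperf earlier, the nvmf target here is started with --wait-for-rpc so the accel framework can be reconfigured before initialization completes: the digest-error test routes crc32c to the error-injection module and later injects corruption so the host sees data digest failures on its TCP connection. A minimal sketch using the RPCs that appear in the log lines that follow (the target's /var/tmp/spdk.sock socket matches the rpc_addr shown above; issuing rpc.py directly is an assumption, the suite uses its rpc_cmd helper):

  # before framework_start_init: send all crc32c work to the error-injection accel module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error

  # ... framework init, TCP transport/listener creation and the bdevperf attach follow as logged ...

  # once I/O is about to start: inject corruption into crc32c operations; the host side then
  # reports "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions, as seen below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256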
00:25:02.775 [2024-11-17 08:25:07.751632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.420 [2024-11-17 08:25:08.267675] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.420 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.684 [2024-11-17 08:25:08.531116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:03.684 null0 00:25:03.684 [2024-11-17 08:25:08.688390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.943 [2024-11-17 08:25:08.712636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86941 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86941 /var/tmp/bperf.sock 00:25:03.943 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:03.943 08:25:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86941 ']' 00:25:03.944 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:03.944 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:03.944 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:03.944 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.944 08:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.944 [2024-11-17 08:25:08.816334] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:03.944 [2024-11-17 08:25:08.816469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86941 ] 00:25:04.203 [2024-11-17 08:25:08.997553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.203 [2024-11-17 08:25:09.103290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.461 [2024-11-17 08:25:09.307497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.719 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.719 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:04.719 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:04.719 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:04.979 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:04.979 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.979 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:04.979 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.979 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.979 08:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.237 nvme0n1 00:25:05.237 08:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:05.237 08:25:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.237 08:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.237 08:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.237 08:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:05.237 08:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.237 Running I/O for 2 seconds... 00:25:05.237 [2024-11-17 08:25:10.200270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.237 [2024-11-17 08:25:10.200330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.237 [2024-11-17 08:25:10.200350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.237 [2024-11-17 08:25:10.215345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.237 [2024-11-17 08:25:10.215393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.237 [2024-11-17 08:25:10.215409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.237 [2024-11-17 08:25:10.230373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.237 [2024-11-17 08:25:10.230420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.237 [2024-11-17 08:25:10.230436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.237 [2024-11-17 08:25:10.245317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.237 [2024-11-17 08:25:10.245355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.237 [2024-11-17 08:25:10.245377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.496 [2024-11-17 08:25:10.260740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.496 [2024-11-17 08:25:10.260779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.496 [2024-11-17 08:25:10.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.496 [2024-11-17 08:25:10.275916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.496 [2024-11-17 08:25:10.275962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:25386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.496 [2024-11-17 08:25:10.275976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.496 [2024-11-17 08:25:10.290987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.496 [2024-11-17 08:25:10.291032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.496 [2024-11-17 08:25:10.291046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.496 [2024-11-17 08:25:10.305728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.496 [2024-11-17 08:25:10.305766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.496 [2024-11-17 08:25:10.305783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.496 [2024-11-17 08:25:10.320480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.320518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.320536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.335268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.335311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.335325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.349944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.349989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.350003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.364625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.364668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.364682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.379361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 
08:25:10.379401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.379418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.394092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.394131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.394147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.408746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.408783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.408803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.423484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.423530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.423544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.438183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.438229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.438244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.452865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.452903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.452920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.467523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.467560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.467578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.482250] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.482289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.482306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.497 [2024-11-17 08:25:10.496933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.497 [2024-11-17 08:25:10.496977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.497 [2024-11-17 08:25:10.496991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.511912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.511956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.511971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.526642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.526679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.526697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.541347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.541386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.541404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.556065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.556109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.556124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.570732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.570777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.570792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.585519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.585571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.585585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.600748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.600787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.600810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.616231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.616271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.616289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.631693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.631735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.631756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.646540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.646583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.646607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.661541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.661580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.661597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.676304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.756 [2024-11-17 08:25:10.676344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.756 [2024-11-17 08:25:10.676366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.756 [2024-11-17 08:25:10.691132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.757 [2024-11-17 08:25:10.691177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.757 [2024-11-17 08:25:10.691191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.757 [2024-11-17 08:25:10.705851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.757 [2024-11-17 08:25:10.705895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.757 [2024-11-17 08:25:10.705909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.757 [2024-11-17 08:25:10.720550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.757 [2024-11-17 08:25:10.720594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.757 [2024-11-17 08:25:10.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.757 [2024-11-17 08:25:10.735285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.757 [2024-11-17 08:25:10.735324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.757 [2024-11-17 08:25:10.735341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.757 [2024-11-17 08:25:10.749985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.757 [2024-11-17 08:25:10.750023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.757 [2024-11-17 08:25:10.750040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.757 [2024-11-17 08:25:10.764713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.757 [2024-11-17 08:25:10.764756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.757 [2024-11-17 08:25:10.764770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.016 [2024-11-17 08:25:10.779750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.016 [2024-11-17 08:25:10.779792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23781 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-11-17 08:25:10.779807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.016 [2024-11-17 08:25:10.794391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.016 [2024-11-17 08:25:10.794437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-11-17 08:25:10.794472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.016 [2024-11-17 08:25:10.809133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.016 [2024-11-17 08:25:10.809171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-11-17 08:25:10.809188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.016 [2024-11-17 08:25:10.823896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.016 [2024-11-17 08:25:10.823934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.016 [2024-11-17 08:25:10.823950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.838714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.838759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.838773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.853393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.853437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.853463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.868145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.868190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.868205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.882893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.882930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.882947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.897608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.897647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.897664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.912294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.912338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.912353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.927011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.927054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.927069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.941733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.941777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.941792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.956435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.956484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.956502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.971206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.971245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.971262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:10.985909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:10.985950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:10.985964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:11.000606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:11.000652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:11.000666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.017 [2024-11-17 08:25:11.015420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.017 [2024-11-17 08:25:11.015479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.017 [2024-11-17 08:25:11.015495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.030427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.030482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.030499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.045332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.045372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.045389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.060035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.060078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.060092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.074800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.074843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.074857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 
08:25:11.089559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.089603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.089618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.104282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.104321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.104341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.118981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.119018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.119036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.140099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.140146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.140160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.154900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.154939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.154956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.169979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.170017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.170033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 16952.00 IOPS, 66.22 MiB/s [2024-11-17T08:25:11.289Z] [2024-11-17 08:25:11.185114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.185160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.185175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.199854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.199898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.199913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.214587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.214634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.214648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.229238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.229278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.229295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.243897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.243936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.243953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.258660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.258703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.258717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.277 [2024-11-17 08:25:11.273374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.277 [2024-11-17 08:25:11.273458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.277 [2024-11-17 08:25:11.273473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.536 [2024-11-17 08:25:11.288370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.536 [2024-11-17 08:25:11.288425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19958 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.536 [2024-11-17 08:25:11.288450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.303743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.303782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.303800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.319039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.319080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.319098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.334484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.334538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.334553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.349638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.349681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.349707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.364578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.364615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.364633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.379590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.379628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.379649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.394387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.394431] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.394457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.409243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.409288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.409303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.424159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.424197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.424216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.439248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.439288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.439306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.454241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.454282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.454296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.469214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.469260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.469274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.483995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.484040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.484054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.498787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.498825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.498842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.513694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.513732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.513749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.528326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.528371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.528386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.537 [2024-11-17 08:25:11.543117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.537 [2024-11-17 08:25:11.543165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.537 [2024-11-17 08:25:11.543180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.558308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.558346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.558363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.573084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.573124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.573141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.587968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.588007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.588024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 
08:25:11.602831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.602874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.602888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.617494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.617538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.617553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.632684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.632722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.632740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.648096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.648136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.648156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.663549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.663589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.663603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.678757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.678795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.678810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.693858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.693896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.693910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.708523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.708560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.708574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.723180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.723219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.723233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.737905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.737942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.737956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.752567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.752605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.752619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.767313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.767358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.767373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.782052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.782271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.782290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.797 [2024-11-17 08:25:11.797125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.797 [2024-11-17 08:25:11.797286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.797 [2024-11-17 08:25:11.797306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.812535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.812579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.812593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.827275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.827320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.827335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.842081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.842124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.842139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.856882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.856924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.856939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.871621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.871663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.871677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.886401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.886466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.886482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.901142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.901302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23129 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.901321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.916123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.916299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.916319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.931054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.931219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.931239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.946028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.946184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.946203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.960922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.961081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.961100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.975833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.975988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.976007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:11.990873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:11.991031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:11.991050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:12.005855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:12.006013] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:12.006031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:12.020759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:12.020915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:12.020934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:12.035690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:12.035843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:12.035864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:12.050650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:12.050804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:12.050824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.057 [2024-11-17 08:25:12.065536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.057 [2024-11-17 08:25:12.065691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.057 [2024-11-17 08:25:12.065710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 [2024-11-17 08:25:12.080693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.080847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.080865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 [2024-11-17 08:25:12.101997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.102044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.102058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 [2024-11-17 08:25:12.116749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.116792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.116806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 [2024-11-17 08:25:12.131418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.131481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.131496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 [2024-11-17 08:25:12.146150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.146195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.146210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 [2024-11-17 08:25:12.160839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.160882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.160897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 16952.00 IOPS, 66.22 MiB/s [2024-11-17T08:25:12.329Z] [2024-11-17 08:25:12.175595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.317 [2024-11-17 08:25:12.175637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.317 [2024-11-17 08:25:12.175652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.317 00:25:07.317 Latency(us) 00:25:07.317 [2024-11-17T08:25:12.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.317 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:07.317 nvme0n1 : 2.01 16968.70 66.28 0.00 0.00 7537.56 7053.67 28635.81 00:25:07.317 [2024-11-17T08:25:12.329Z] =================================================================================================================== 00:25:07.317 [2024-11-17T08:25:12.329Z] Total : 16968.70 66.28 0.00 0.00 7537.56 7053.67 28635.81 00:25:07.317 { 00:25:07.317 "results": [ 00:25:07.317 { 00:25:07.317 "job": "nvme0n1", 00:25:07.317 "core_mask": "0x2", 00:25:07.317 "workload": "randread", 00:25:07.317 "status": "finished", 00:25:07.317 "queue_depth": 128, 00:25:07.317 "io_size": 4096, 00:25:07.317 "runtime": 2.005575, 00:25:07.317 "iops": 16968.699749448413, 00:25:07.317 "mibps": 66.28398339628286, 00:25:07.317 "io_failed": 0, 00:25:07.317 "io_timeout": 0, 00:25:07.317 "avg_latency_us": 
7537.561714653631, 00:25:07.317 "min_latency_us": 7053.673895582329, 00:25:07.317 "max_latency_us": 28635.81044176707 00:25:07.317 } 00:25:07.317 ], 00:25:07.317 "core_count": 1 00:25:07.317 } 00:25:07.317 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:07.317 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:07.317 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:07.317 | .driver_specific 00:25:07.317 | .nvme_error 00:25:07.317 | .status_code 00:25:07.317 | .command_transient_transport_error' 00:25:07.317 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 )) 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86941 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86941 ']' 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86941 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86941 00:25:07.577 killing process with pid 86941 00:25:07.577 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.577 00:25:07.577 Latency(us) 00:25:07.577 [2024-11-17T08:25:12.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.577 [2024-11-17T08:25:12.589Z] =================================================================================================================== 00:25:07.577 [2024-11-17T08:25:12.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86941' 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86941 00:25:07.577 08:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86941 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@58 -- # bperfpid=87002 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87002 /var/tmp/bperf.sock 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87002 ']' 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.514 08:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.514 [2024-11-17 08:25:13.494200] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:08.514 [2024-11-17 08:25:13.494553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:25:08.514 Zero copy mechanism will not be used. 00:25:08.514 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87002 ] 00:25:08.773 [2024-11-17 08:25:13.678644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.032 [2024-11-17 08:25:13.790583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.032 [2024-11-17 08:25:13.986708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:09.291 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.291 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:09.291 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:09.291 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:09.550 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:09.550 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.550 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.550 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.550 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.550 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.810 nvme0n1 00:25:09.810 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:09.810 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.810 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.810 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.810 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:09.810 08:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.070 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.070 Zero copy mechanism will not be used. 00:25:10.070 Running I/O for 2 seconds... 00:25:10.070 [2024-11-17 08:25:14.855204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.857898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.857924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.863676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.863733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.863751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.869313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.869538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.869560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.875039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.875087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.875108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.880649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.880818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.880845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.886512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.886569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.886584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.892205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.892259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.892275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.897828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.897874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.897893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.070 [2024-11-17 08:25:14.903399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.070 [2024-11-17 08:25:14.903578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.070 [2024-11-17 08:25:14.903605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.909065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.909113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.909134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.914546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.914601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.914617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.920050] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.920102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.920117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.925589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.925635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.925670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.931138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.931302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.931328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.936736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.936790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.936806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.942153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.942205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.942221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.947673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.947718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.947752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.953100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.953145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.953166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.958547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.958592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.958614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.963986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.964037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.964068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.969534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.969587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.969603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.975044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.975091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.975109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.980518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.980560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.980577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.985961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.986016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.986031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.991372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.991429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.991462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:14.996799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:14.996846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:14.996864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.002119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.002165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.002184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.007525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.007589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.007621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.012967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.013018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.013049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.018338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.018388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.018404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.023856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.023902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.023933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.029172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.029216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.029250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.034670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.034731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.034748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.040170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.040221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.071 [2024-11-17 08:25:15.040251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.071 [2024-11-17 08:25:15.045540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.071 [2024-11-17 08:25:15.045581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.045600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.072 [2024-11-17 08:25:15.050951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.072 [2024-11-17 08:25:15.050998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.051017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.072 [2024-11-17 08:25:15.056399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.072 [2024-11-17 08:25:15.056485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.056501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.072 [2024-11-17 08:25:15.061896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.072 [2024-11-17 08:25:15.062087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.062107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.072 [2024-11-17 08:25:15.067419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.072 [2024-11-17 08:25:15.067479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.067499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.072 [2024-11-17 08:25:15.072886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.072 [2024-11-17 08:25:15.073071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.073097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.072 [2024-11-17 08:25:15.078718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.072 [2024-11-17 08:25:15.078766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.072 [2024-11-17 08:25:15.078785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.084286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.084492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.084513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.089950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.090000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.090014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.095452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.095503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.095522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.100781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.100823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.100841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.106075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.106127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.106142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.111479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.111530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.111545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.116804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.116848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.116866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.122172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.122215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.122233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.127581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.127640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.127655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.132872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.132926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.132941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.138226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.138387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.138412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.143758] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.143804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.143822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.149063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.149108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.149125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.154358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.154572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.159939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.159990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.160004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.165356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.165401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.165419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.170761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.170807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.170825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.176015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.176067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.176082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.181324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.332 [2024-11-17 08:25:15.181532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.332 [2024-11-17 08:25:15.181552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.332 [2024-11-17 08:25:15.186934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.186981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.186999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.192270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.192315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.192333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.197604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.197655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.197670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.202875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.202925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.202940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.208144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.208303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.208327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.213527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.213572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.213590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.218858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.218909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.218924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.224115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.224282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.224301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.229577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.229630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.229645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.234822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.234989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.235014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.240313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.240360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.240378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.245549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.245595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.245609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.250783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.250834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.250849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.256073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.256126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.256143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.261424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.261480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.261498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.266753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.266805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.266821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.272044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.272095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.272109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.277243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.277407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.277435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.282679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.282725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.282743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.287940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.287992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.288006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.293178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.293340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.293359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.298570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.298610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.298629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.303888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.304051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.304076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.309363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.309407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.309425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.314681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.314865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.314884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.320156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.320207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.320221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.333 [2024-11-17 08:25:15.325521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:10.333 [2024-11-17 08:25:15.325566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.333 [2024-11-17 08:25:15.325589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.334 [2024-11-17 08:25:15.330771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.334 [2024-11-17 08:25:15.330816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.334 [2024-11-17 08:25:15.330834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.334 [2024-11-17 08:25:15.335953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.334 [2024-11-17 08:25:15.336004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.334 [2024-11-17 08:25:15.336019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.341596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.341645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.341660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.347039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.347085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.347103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.352509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.352551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.352569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.357800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.357850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.357865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.363126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.363180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.363195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.368593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.368637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.368655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.373929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.373973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.373993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.379274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.379329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.379344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.384623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.384671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.384686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.389867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.389918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.389932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.395178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.395332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.395357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.400734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.400779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.400796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.406017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.406067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.406082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.411352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.411403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.411418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.416660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.416705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.416722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.421883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.422060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.422088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.427393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.427464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.427492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.432746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.432921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.432941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.594 [2024-11-17 08:25:15.438239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.594 [2024-11-17 08:25:15.438284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.594 [2024-11-17 08:25:15.438321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.443598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.443643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.443660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.448916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.448969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.448984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.454192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.454242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.454256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.459547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.459593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.459611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.464844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.464888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.464905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.470175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.470217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.470238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.475489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.475542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.475557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.480769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.480818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.480833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.485942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.485987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.486004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.491227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.491389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.491414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.496596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.496654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.496668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.501891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.501945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.501959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.507176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.507339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.507366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.512754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.512799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.512817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.518129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.518180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.518212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.523520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.523565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.523580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.528826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.528872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.528887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.534079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.534125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.534139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.539422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.539480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.539496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.544708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.544753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.544768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.549979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.550023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.550037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.555351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.555525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.555545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.560728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.560773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.560787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.565953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.565998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.566012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.571370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.571548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.571567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.576914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.595 [2024-11-17 08:25:15.576959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.595 [2024-11-17 08:25:15.576974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.595 [2024-11-17 08:25:15.582120] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.596 [2024-11-17 08:25:15.582166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.596 [2024-11-17 08:25:15.582180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.596 [2024-11-17 08:25:15.587374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.596 [2024-11-17 08:25:15.587564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.596 [2024-11-17 08:25:15.587583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.596 [2024-11-17 08:25:15.592937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.596 [2024-11-17 08:25:15.592983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.596 [2024-11-17 08:25:15.592998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.596 [2024-11-17 08:25:15.598241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.596 [2024-11-17 08:25:15.598285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.596 [2024-11-17 08:25:15.598299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.603801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.603845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.603860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.609170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.609215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.609229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.614601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.614649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.614664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.619992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.620036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.620052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.625297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.625340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.625354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.630590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.630644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.630659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.635826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.635871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.635884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.641064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.641230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.641250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.646519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.646558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.646572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.651739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.856 [2024-11-17 08:25:15.651920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.856 [2024-11-17 08:25:15.651941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.856 [2024-11-17 08:25:15.657286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.657332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.657347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.662607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.662654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.662669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.667974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.668019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.668034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.673324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.673369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.673383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.678647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.678693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.678708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.683965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.684012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.684028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.689282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.689326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.689340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.694618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.694664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.694678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.699868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.699912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.699926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.705062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.705105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.705120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.710315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.710507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.710550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.715833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.715879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.715893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.721011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.721054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.721069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.726260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.726433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.726475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.731774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.731820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.731834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.737016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.737060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.737074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.742271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.742438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.742479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.747735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.747781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.747795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.752944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.752990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.753004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.758405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.758469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.758485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.763835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.763877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.763890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.769269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.769316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.769331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.774889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.774931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.774945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.780269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.780319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.780334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.785705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.785751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.785766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.791154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.791202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.857 [2024-11-17 08:25:15.791218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.857 [2024-11-17 08:25:15.796571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.857 [2024-11-17 08:25:15.796625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.796640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.801979] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.802024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.802038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.807325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.807373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.807388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.812646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.812690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.812704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.817944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.817989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.818003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.823314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.823510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.823530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.828722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.828767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.828781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.833969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.834014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.834028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.839267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.839453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.839474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.844738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.844784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.844798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.858 5704.00 IOPS, 713.00 MiB/s [2024-11-17T08:25:15.870Z] [2024-11-17 08:25:15.850354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.850404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.850418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.855731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.855777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.855791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.858 [2024-11-17 08:25:15.861010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.858 [2024-11-17 08:25:15.861054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.858 [2024-11-17 08:25:15.861068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.866515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.866560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.866574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.871993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.872037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:11.119 [2024-11-17 08:25:15.872051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.877313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.877359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.877373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.882646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.882690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.882705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.887972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.888016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.888031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.893341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.893528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.893547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.898738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.898784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.898800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.904034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.904079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.904093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.909214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.909374] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.909394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.914673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.914719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.914732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.919851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.919895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.919909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.925169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.925331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.925350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.930551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.930591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.930605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.935786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.935831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.935844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.940986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.941158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.941177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.946401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 
[2024-11-17 08:25:15.946471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.946488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.951633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.951816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.951835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.957090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.957137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.957152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.962343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.962389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.962404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.967722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.967768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.967783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.973087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.973130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.973145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.978368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.978414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.978428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.983709] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.119 [2024-11-17 08:25:15.983754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.119 [2024-11-17 08:25:15.983768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.119 [2024-11-17 08:25:15.988901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:15.988946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:15.988960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:15.994146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:15.994309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:15.994328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:15.999482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:15.999525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:15.999539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.004728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.004892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.004912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.010101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.010148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.010163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.015328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.015374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.015389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.020563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.020608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.020623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.025819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.025863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.025877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.031027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.031076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.031090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.036232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.036407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.036426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.041757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.041803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.041817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.046988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.047033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.047047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.052280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.052440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.052474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.057825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.057868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.057883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.063221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.063269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.063284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.068691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.068736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.068750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.074071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.074117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.074132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.079430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.079489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.079503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.084900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.084948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.084963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.090349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.090395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.090409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.095795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.095977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.095998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.101406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.101464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.101480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.106802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.106981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.107001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.112259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.112307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.112322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.117722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.117901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.117921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.120 [2024-11-17 08:25:16.123309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.120 [2024-11-17 08:25:16.123360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.120 [2024-11-17 08:25:16.123376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.381 [2024-11-17 08:25:16.128948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.381 [2024-11-17 08:25:16.129127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.381 [2024-11-17 08:25:16.129146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.381 [2024-11-17 08:25:16.134806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.381 [2024-11-17 08:25:16.134852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.381 [2024-11-17 08:25:16.134868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.381 [2024-11-17 08:25:16.140284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.381 [2024-11-17 08:25:16.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.381 [2024-11-17 08:25:16.140483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.381 [2024-11-17 08:25:16.145852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.381 [2024-11-17 08:25:16.145898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.381 [2024-11-17 08:25:16.145912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.381 [2024-11-17 08:25:16.151329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.381 [2024-11-17 08:25:16.151375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.381 [2024-11-17 08:25:16.151390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.381 [2024-11-17 08:25:16.156719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.381 [2024-11-17 08:25:16.156765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.156779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.161981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.162021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.162034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.167366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.167410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.167424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.172807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.172847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.172861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.178175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.178225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.178239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.183535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.183574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.183588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.188801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.188841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.188855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.194198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.194237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.194251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.199498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.199536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.199550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.204832] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.204874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.204890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.210214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.210253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.210267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.215618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.215658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.215672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.220999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.221039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.221053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.226352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.226393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.226407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.231726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.231768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.231782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.237029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.237070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.237084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.242402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.242453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.242474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.247849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.247888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.247902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.253120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.253161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.253174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.258470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.258512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.258526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.263844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.263884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.263898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.269056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.269096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.269110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.274257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.274297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.274311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.279498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.279537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.382 [2024-11-17 08:25:16.279550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.382 [2024-11-17 08:25:16.284753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.382 [2024-11-17 08:25:16.284794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.284824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.289995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.290035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.290049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.295241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.295283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.295297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.300602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.300641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.300655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.305774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.305814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.305827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.311005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.311046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.311061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.316257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.316296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.316310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.321497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.321536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.321550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.326698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.326738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.326753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.331987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.332029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.332043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.337262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.337302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.337316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.342627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.342668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.342682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.347893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.347933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.347946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.353066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.353106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.353119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.358234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.358273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.358287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.363434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.363482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.363496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.368729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.368769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.368783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.374010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.374049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.374063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.379218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.379259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.379273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.384389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.384430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.384459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.383 [2024-11-17 08:25:16.389896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.383 [2024-11-17 08:25:16.389935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.383 [2024-11-17 08:25:16.389949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.395462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.395512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.395542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.400997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.401037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.401051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.406296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.406336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.406349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.411650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.411690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.411704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.416885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.416925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.416939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.422124] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.422164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.422178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.427414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.427466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.427481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.432685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.432726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.432740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.437915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.437955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.437969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.443111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.443152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.443166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.448302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.644 [2024-11-17 08:25:16.448343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.644 [2024-11-17 08:25:16.448356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.644 [2024-11-17 08:25:16.453592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.453630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.453643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.458771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.458812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.458826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.464044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.464084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.464098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.469268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.469310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.469323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.474495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.474532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.474546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.479703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.479742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.479756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.484876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.484918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.484933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.490070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.490110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.490123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.495346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.495387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.495402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.500596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.500635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.500649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.505886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.505924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.505938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.511090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.511131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.511144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.516253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.516293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.516307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.521510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.521549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.521563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.526816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.526857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.531987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.532027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.532041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.537197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.537237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.537251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.542520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.542558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.547801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.547841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.547855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.553172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.553213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.553227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.558398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.558449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.558471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.563604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.563643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.563657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.568786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.568826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.568839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.645 [2024-11-17 08:25:16.574112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.645 [2024-11-17 08:25:16.574150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.645 [2024-11-17 08:25:16.574163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.579370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.579410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.579424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.584599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.584638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.584651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.589891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.589932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.589946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.595293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.595334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.595348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.600486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.600525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.600539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.605837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.605878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.605891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.611011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.611053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.611066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.616234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.616274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.616287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.621470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.621521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.621534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.626707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.626748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.626763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.631955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.631996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.632026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.637262] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.637302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.637332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.642657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.642698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.642711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.647849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.647889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.647904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.646 [2024-11-17 08:25:16.653267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.646 [2024-11-17 08:25:16.653307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.646 [2024-11-17 08:25:16.653321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.906 [2024-11-17 08:25:16.658857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.658899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.658913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.664285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.664324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.664337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.669722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.669763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.669777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.674978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.675020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.675034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.680351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.680391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.680404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.685699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.685739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.685769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.691033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.691074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.691087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.696273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.696312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.696326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.701570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.701609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.701623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.706870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.706911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.706925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.712137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.712178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.712191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.717371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.717411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.717425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.722586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.722624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.722639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.727703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.727743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.727757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.732864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.732903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.732918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.737980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.738020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.738033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.743309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.743350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.743365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.748620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.748658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.748688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.753837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.753877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.753907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.759136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.759176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.759191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.764422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.907 [2024-11-17 08:25:16.764487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.907 [2024-11-17 08:25:16.764502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.907 [2024-11-17 08:25:16.769693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.769733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.769763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.775003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.775042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.775056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.780189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.780228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.780242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.785569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.785611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.785625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.790907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.790948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.790962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.796216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.796257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.796271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.801650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.801688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.801702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.807143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.807185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.807199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.812575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.812628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.817919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.817960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.823402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.823457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.823473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.828793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.828833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.828847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.834152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.834192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.834207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.839493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.839533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.839547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.908 [2024-11-17 08:25:16.844845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.908 [2024-11-17 08:25:16.844884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.908 [2024-11-17 08:25:16.844898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:11.908 5758.00 IOPS, 719.75 MiB/s 00:25:11.908 Latency(us) 00:25:11.908 [2024-11-17T08:25:16.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.908 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:11.908 nvme0n1 : 2.00 5758.26 719.78 0.00 0.00 2776.05 2500.37 11738.58 00:25:11.908 [2024-11-17T08:25:16.920Z] =================================================================================================================== 00:25:11.908 [2024-11-17T08:25:16.920Z] Total : 5758.26 
719.78 0.00 0.00 2776.05 2500.37 11738.58 00:25:11.908 { 00:25:11.908 "results": [ 00:25:11.908 { 00:25:11.908 "job": "nvme0n1", 00:25:11.908 "core_mask": "0x2", 00:25:11.908 "workload": "randread", 00:25:11.908 "status": "finished", 00:25:11.908 "queue_depth": 16, 00:25:11.908 "io_size": 131072, 00:25:11.908 "runtime": 2.002689, 00:25:11.908 "iops": 5758.2580220893005, 00:25:11.908 "mibps": 719.7822527611626, 00:25:11.908 "io_failed": 0, 00:25:11.908 "io_timeout": 0, 00:25:11.908 "avg_latency_us": 2776.0539620848986, 00:25:11.908 "min_latency_us": 2500.3694779116468, 00:25:11.908 "max_latency_us": 11738.576706827309 00:25:11.908 } 00:25:11.908 ], 00:25:11.908 "core_count": 1 00:25:11.908 } 00:25:11.908 08:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:11.908 08:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:11.908 08:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:11.908 | .driver_specific 00:25:11.908 | .nvme_error 00:25:11.908 | .status_code 00:25:11.908 | .command_transient_transport_error' 00:25:11.908 08:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87002 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87002 ']' 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87002 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87002 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:12.168 killing process with pid 87002 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87002' 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87002 00:25:12.168 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.168 00:25:12.168 Latency(us) 00:25:12.168 [2024-11-17T08:25:17.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.168 [2024-11-17T08:25:17.180Z] =================================================================================================================== 00:25:12.168 [2024-11-17T08:25:17.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.168 08:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87002 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 
128 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87069 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87069 /var/tmp/bperf.sock 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87069 ']' 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.547 08:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.547 [2024-11-17 08:25:18.285110] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
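Editor's note: the trace above launches a fresh bdevperf instance for the randwrite / 4096-byte / qd128 error pass and then blocks until its RPC socket is listening. A minimal stand-alone sketch of that step, assuming the SPDK checkout path shown in the log and the waitforlisten helper from the suite's common scripts (the pid and socket values are simply whatever the shell assigns):

    bperf_sock=/var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" "$bperf_sock"   # returns once bdevperf listens on the UNIX domain socket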
00:25:13.547 [2024-11-17 08:25:18.285254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87069 ] 00:25:13.547 [2024-11-17 08:25:18.472790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.806 [2024-11-17 08:25:18.581192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.806 [2024-11-17 08:25:18.779998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.375 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.634 nvme0n1 00:25:14.634 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:14.634 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.634 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.634 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.634 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:14.634 08:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:14.894 Running I/O for 2 seconds... 
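Editor's note: once bdevperf is up, the trace configures the initiator and the accel error injector and then drives I/O; the stream of WRITE completions with "Data digest error" that follows is the intended outcome of the injected crc32c corruption. A minimal sketch of the RPC sequence, using the helper names exactly as they appear in the trace (bperf_rpc expands to rpc.py against /var/tmp/bperf.sock and bperf_py to bdevperf.py, as shown above; rpc_cmd is the suite's generic RPC helper):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters, retry indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # start with crc32c injection switched off
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # attach with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # inject crc32c corruption (parameters as traced)
    bperf_py perform_tests                                                    # run the timed randwrite workload

After the run, the suite reads the transient-error count back the same way it did for the randread pass above: bperf_rpc bdev_get_iostat -b nvme0n1 piped through jq to '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error', and the test passes only if that count is greater than zero (372 in the earlier pass).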
00:25:14.894 [2024-11-17 08:25:19.693480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:25:14.894 [2024-11-17 08:25:19.695013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.695080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.707822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:25:14.894 [2024-11-17 08:25:19.709291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.709345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.721782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:25:14.894 [2024-11-17 08:25:19.723248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.723294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.735829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:25:14.894 [2024-11-17 08:25:19.737272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.737317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.749783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf92c0 00:25:14.894 [2024-11-17 08:25:19.751206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.751262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.763940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:25:14.894 [2024-11-17 08:25:19.765325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.765369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.777822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:25:14.894 [2024-11-17 08:25:19.779216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.779261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.791812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:25:14.894 [2024-11-17 08:25:19.793155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.793207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.805797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:25:14.894 [2024-11-17 08:25:19.807145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.807192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.819748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:25:14.894 [2024-11-17 08:25:19.821089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.821133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.833949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:25:14.894 [2024-11-17 08:25:19.835250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.835307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.847890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:25:14.894 [2024-11-17 08:25:19.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.849225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.861900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:25:14.894 [2024-11-17 08:25:19.863189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.863234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.875675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:25:14.894 [2024-11-17 08:25:19.876927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.894 [2024-11-17 08:25:19.876978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:14.894 [2024-11-17 08:25:19.889762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:25:14.895 [2024-11-17 08:25:19.891020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.895 [2024-11-17 08:25:19.891065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.904202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:25:15.155 [2024-11-17 08:25:19.905497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:19.905536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.924715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:25:15.155 [2024-11-17 08:25:19.927047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:19.927091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.938784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:25:15.155 [2024-11-17 08:25:19.940989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:19.941040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.952719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:25:15.155 [2024-11-17 08:25:19.954971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:19.955014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.966529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:25:15.155 [2024-11-17 08:25:19.968694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:19.968735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.980262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:25:15.155 [2024-11-17 08:25:19.982505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.155 [2024-11-17 08:25:19.982556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:19.993896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:25:15.155 [2024-11-17 08:25:19.996027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:19.996068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.008207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:25:15.155 [2024-11-17 08:25:20.010388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.010431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.022700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:25:15.155 [2024-11-17 08:25:20.024852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.024903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.037243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:25:15.155 [2024-11-17 08:25:20.039413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.039469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.051459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:25:15.155 [2024-11-17 08:25:20.053670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.053712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.066178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:25:15.155 [2024-11-17 08:25:20.068310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.068360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.080487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf92c0 00:25:15.155 [2024-11-17 08:25:20.082643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.082689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.094556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:25:15.155 [2024-11-17 08:25:20.096626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.096669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.108510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:25:15.155 [2024-11-17 08:25:20.110520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.110572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.122297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:25:15.155 [2024-11-17 08:25:20.124370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.124414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.136223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:25:15.155 [2024-11-17 08:25:20.138241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.138283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.149860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:25:15.155 [2024-11-17 08:25:20.151878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.155 [2024-11-17 08:25:20.151926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.155 [2024-11-17 08:25:20.163821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:25:15.416 [2024-11-17 08:25:20.165943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.165985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.178039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:25:15.416 [2024-11-17 08:25:20.180044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.180086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.191818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:25:15.416 [2024-11-17 08:25:20.193711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.193760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.205514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:25:15.416 [2024-11-17 08:25:20.207471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.207515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.219182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:25:15.416 [2024-11-17 08:25:20.221144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.221186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.233093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:25:15.416 [2024-11-17 08:25:20.234987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.235037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.246772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:25:15.416 [2024-11-17 08:25:20.248661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.248703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.260431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:25:15.416 [2024-11-17 08:25:20.262303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.262346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.274073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 
00:25:15.416 [2024-11-17 08:25:20.275936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.275986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.287809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:25:15.416 [2024-11-17 08:25:20.289597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.289639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.301367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:25:15.416 [2024-11-17 08:25:20.303227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.303272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.315091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:25:15.416 [2024-11-17 08:25:20.316916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.316970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.328716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:25:15.416 [2024-11-17 08:25:20.330444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.330499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.342262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:25:15.416 [2024-11-17 08:25:20.344053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.344096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.356013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:25:15.416 [2024-11-17 08:25:20.357861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.357914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.369900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:25:15.416 [2024-11-17 08:25:20.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.371575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.383503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920 00:25:15.416 [2024-11-17 08:25:20.385079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.385270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.397345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:25:15.416 [2024-11-17 08:25:20.399170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.399355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.411492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:25:15.416 [2024-11-17 08:25:20.413223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.416 [2024-11-17 08:25:20.413393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.416 [2024-11-17 08:25:20.425769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:25:15.676 [2024-11-17 08:25:20.427744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.427925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.440322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:25:15.676 [2024-11-17 08:25:20.442074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.442252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.454708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:25:15.676 [2024-11-17 08:25:20.456362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.456544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.468836] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:25:15.676 [2024-11-17 08:25:20.470477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.470663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.482868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:25:15.676 [2024-11-17 08:25:20.484475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.484654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.496935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:25:15.676 [2024-11-17 08:25:20.498577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.498748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.511048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30 00:25:15.676 [2024-11-17 08:25:20.512691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.512736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.524893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:25:15.676 [2024-11-17 08:25:20.526320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.676 [2024-11-17 08:25:20.526495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.676 [2024-11-17 08:25:20.538606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:25:15.676 [2024-11-17 08:25:20.540014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.540058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.552186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:25:15.677 [2024-11-17 08:25:20.553658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.553833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:0007 p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.566189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6b70 00:25:15.677 [2024-11-17 08:25:20.567793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.567845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.580080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:25:15.677 [2024-11-17 08:25:20.581438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.581486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.593641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:25:15.677 [2024-11-17 08:25:20.595039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.595083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.607346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:25:15.677 [2024-11-17 08:25:20.608765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.608818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.621097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:25:15.677 [2024-11-17 08:25:20.622628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.622672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.634873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:25:15.677 [2024-11-17 08:25:20.636171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.636320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.648584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:25:15.677 [2024-11-17 08:25:20.649854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.649906] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.677 [2024-11-17 08:25:20.662339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:25:15.677 [2024-11-17 08:25:20.663751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.663796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.677 17965.00 IOPS, 70.18 MiB/s [2024-11-17T08:25:20.689Z] [2024-11-17 08:25:20.676311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:25:15.677 [2024-11-17 08:25:20.677629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.677 [2024-11-17 08:25:20.677673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.937 [2024-11-17 08:25:20.690691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1f80 00:25:15.937 [2024-11-17 08:25:20.692042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.937 [2024-11-17 08:25:20.692097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.937 [2024-11-17 08:25:20.704686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:25:15.937 [2024-11-17 08:25:20.705948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.937 [2024-11-17 08:25:20.705989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.937 [2024-11-17 08:25:20.718500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:25:15.937 [2024-11-17 08:25:20.719702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.937 [2024-11-17 08:25:20.719744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.937 [2024-11-17 08:25:20.732161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:25:15.937 [2024-11-17 08:25:20.733564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.937 [2024-11-17 08:25:20.733606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.937 [2024-11-17 08:25:20.745929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:25:15.937 [2024-11-17 08:25:20.747157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3530 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:15.937 [2024-11-17 08:25:20.747200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.937 [2024-11-17 08:25:20.759624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:25:15.937 [2024-11-17 08:25:20.760779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.937 [2024-11-17 08:25:20.760821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.773318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:25:15.938 [2024-11-17 08:25:20.774562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.774614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.787093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:25:15.938 [2024-11-17 08:25:20.788417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.788459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.806546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:25:15.938 [2024-11-17 08:25:20.808674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.808716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.820288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:25:15.938 [2024-11-17 08:25:20.822529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.822575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.834037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:25:15.938 [2024-11-17 08:25:20.836332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.836395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.847967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:25:15.938 [2024-11-17 08:25:20.850040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:25 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.850082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.862019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:25:15.938 [2024-11-17 08:25:20.864234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.864386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.876391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:25:15.938 [2024-11-17 08:25:20.878631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.878797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.890865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:25:15.938 [2024-11-17 08:25:20.893071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.893124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.905190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:25:15.938 [2024-11-17 08:25:20.907533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.907575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.919525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1f80 00:25:15.938 [2024-11-17 08:25:20.921799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.921854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.938 [2024-11-17 08:25:20.934036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:25:15.938 [2024-11-17 08:25:20.936198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.938 [2024-11-17 08:25:20.936357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:20.948899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:25:16.198 [2024-11-17 
08:25:20.951105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:20.951251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:20.963620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:25:16.198 [2024-11-17 08:25:20.965610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:20.965659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:20.977570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:25:16.198 [2024-11-17 08:25:20.979588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:20.979635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:20.991717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:25:16.198 [2024-11-17 08:25:20.993721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:20.993761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.005765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:25:16.198 [2024-11-17 08:25:21.007781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.007828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.019768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:25:16.198 [2024-11-17 08:25:21.021726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.021769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.033718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:25:16.198 [2024-11-17 08:25:21.035668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.035705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.047725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016be6b70 00:25:16.198 [2024-11-17 08:25:21.049670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.049717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.061898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:25:16.198 [2024-11-17 08:25:21.063810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.063853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.075749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:25:16.198 [2024-11-17 08:25:21.077572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.077610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:16.198 [2024-11-17 08:25:21.089460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:25:16.198 [2024-11-17 08:25:21.091306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.198 [2024-11-17 08:25:21.091353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.103230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30 00:25:16.199 [2024-11-17 08:25:21.105043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.105087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.116898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:25:16.199 [2024-11-17 08:25:21.118701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.118740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.130568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:25:16.199 [2024-11-17 08:25:21.132311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.132359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.144686] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:25:16.199 [2024-11-17 08:25:21.146401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.146470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.158274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:25:16.199 [2024-11-17 08:25:21.160047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.160097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.171982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:25:16.199 [2024-11-17 08:25:21.173683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.173730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.185567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:25:16.199 [2024-11-17 08:25:21.187241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.187285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:16.199 [2024-11-17 08:25:21.199170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:25:16.199 [2024-11-17 08:25:21.200893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.199 [2024-11-17 08:25:21.200932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.213493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:25:16.459 [2024-11-17 08:25:21.215298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.215354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.227405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920 00:25:16.459 [2024-11-17 08:25:21.229068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.229118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 
p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.241072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:25:16.459 [2024-11-17 08:25:21.242738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.242776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.254699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:25:16.459 [2024-11-17 08:25:21.256289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.256336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.268459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:25:16.459 [2024-11-17 08:25:21.270034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.270077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.282161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:25:16.459 [2024-11-17 08:25:21.283893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.283932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.295965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:25:16.459 [2024-11-17 08:25:21.297514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.297562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.309691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:25:16.459 [2024-11-17 08:25:21.311228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.311271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.323355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:25:16.459 [2024-11-17 08:25:21.324963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.325001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.337151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:25:16.459 [2024-11-17 08:25:21.338705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.338751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.350885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:25:16.459 [2024-11-17 08:25:21.352366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.352409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.364511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:25:16.459 [2024-11-17 08:25:21.365985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.366023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.378102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:25:16.459 [2024-11-17 08:25:21.379601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.379651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.391815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:25:16.459 [2024-11-17 08:25:21.393247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.393291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.405420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:25:16.459 [2024-11-17 08:25:21.406903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.406942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.419118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:25:16.459 [2024-11-17 08:25:21.420555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.420601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.432867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:25:16.459 [2024-11-17 08:25:21.434250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.459 [2024-11-17 08:25:21.434294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:16.459 [2024-11-17 08:25:21.446483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:25:16.460 [2024-11-17 08:25:21.447861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.460 [2024-11-17 08:25:21.447900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:16.460 [2024-11-17 08:25:21.460131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:25:16.460 [2024-11-17 08:25:21.461498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.460 [2024-11-17 08:25:21.461544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:16.719 [2024-11-17 08:25:21.474387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:25:16.719 [2024-11-17 08:25:21.475879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.719 [2024-11-17 08:25:21.475922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.719 [2024-11-17 08:25:21.488425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:25:16.719 [2024-11-17 08:25:21.489765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.719 [2024-11-17 08:25:21.489803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.719 [2024-11-17 08:25:21.501980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:25:16.719 [2024-11-17 08:25:21.503298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.719 [2024-11-17 08:25:21.503337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:16.719 [2024-11-17 08:25:21.515719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:25:16.720 [2024-11-17 08:25:21.517010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:16.720 [2024-11-17 08:25:21.517049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.529567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf92c0 00:25:16.720 [2024-11-17 08:25:21.530893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.530934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.543685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:25:16.720 [2024-11-17 08:25:21.544944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.544983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.557641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:25:16.720 [2024-11-17 08:25:21.558961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.559001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.571648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:25:16.720 [2024-11-17 08:25:21.572956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.572995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.585648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:25:16.720 [2024-11-17 08:25:21.586934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.586973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.599653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:25:16.720 [2024-11-17 08:25:21.600849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.600888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.613403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:25:16.720 [2024-11-17 08:25:21.614625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.614663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.627292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:25:16.720 [2024-11-17 08:25:21.628510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.628549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.641135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:25:16.720 [2024-11-17 08:25:21.642286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.642325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.655060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:25:16.720 [2024-11-17 08:25:21.656230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.656269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:16.720 [2024-11-17 08:25:21.668997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:25:16.720 [2024-11-17 08:25:21.670114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.720 [2024-11-17 08:25:21.670153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:16.720 18090.50 IOPS, 70.67 MiB/s 00:25:16.720 Latency(us) 00:25:16.720 [2024-11-17T08:25:21.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:16.720 nvme0n1 : 2.00 18112.00 70.75 0.00 0.00 7061.64 6158.80 26214.40 00:25:16.720 [2024-11-17T08:25:21.732Z] =================================================================================================================== 00:25:16.720 [2024-11-17T08:25:21.732Z] Total : 18112.00 70.75 0.00 0.00 7061.64 6158.80 26214.40 00:25:16.720 { 00:25:16.720 "results": [ 00:25:16.720 { 00:25:16.720 "job": "nvme0n1", 00:25:16.720 "core_mask": "0x2", 00:25:16.720 "workload": "randwrite", 00:25:16.720 "status": "finished", 00:25:16.720 "queue_depth": 128, 00:25:16.720 "io_size": 4096, 00:25:16.720 "runtime": 2.004693, 00:25:16.720 "iops": 18112.000191550527, 00:25:16.720 "mibps": 70.75000074824425, 00:25:16.720 "io_failed": 0, 00:25:16.720 "io_timeout": 0, 00:25:16.720 "avg_latency_us": 7061.641054930013, 00:25:16.720 "min_latency_us": 6158.804819277108, 00:25:16.720 "max_latency_us": 26214.4 00:25:16.720 } 00:25:16.720 ], 00:25:16.720 "core_count": 1 
00:25:16.720 } 00:25:16.720 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:16.720 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:16.720 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:16.720 | .driver_specific 00:25:16.720 | .nvme_error 00:25:16.720 | .status_code 00:25:16.720 | .command_transient_transport_error' 00:25:16.720 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87069 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87069 ']' 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87069 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87069 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.979 killing process with pid 87069 00:25:16.979 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.979 00:25:16.979 Latency(us) 00:25:16.979 [2024-11-17T08:25:21.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.979 [2024-11-17T08:25:21.991Z] =================================================================================================================== 00:25:16.979 [2024-11-17T08:25:21.991Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87069' 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87069 00:25:16.979 08:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87069 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87130 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87130 /var/tmp/bperf.sock 
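For reference, the transient-error check traced above reduces to one bdev_get_iostat RPC filtered with jq; a minimal standalone sketch of that step, assuming the same /var/tmp/bperf.sock RPC socket and nvme0n1 bdev used in this run, is:

    # Count the command_transient_transport_error completions recorded for nvme0n1
    # (same rpc.py invocation and jq filter shown in the host/digest.sh trace above).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test passes this stage when the injected data-digest corruption surfaced
    # as transient transport errors, i.e. the count is non-zero (142 in this run).
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"

The error counters are only populated because the controller was attached with --nvme-error-stat enabled in bdev_nvme_set_options earlier in the script.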
00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 87130 ']' 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.916 08:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.175 [2024-11-17 08:25:23.005672] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:18.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.175 Zero copy mechanism will not be used. 00:25:18.175 [2024-11-17 08:25:23.005795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87130 ] 00:25:18.435 [2024-11-17 08:25:23.191968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.435 [2024-11-17 08:25:23.300110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.694 [2024-11-17 08:25:23.498954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:18.953 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.953 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:18.953 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.953 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.211 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:19.211 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.211 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.211 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.211 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.211 08:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.471 nvme0n1 00:25:19.471 08:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:19.471 08:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.471 08:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.471 08:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.471 08:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:19.471 08:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.471 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:19.471 Zero copy mechanism will not be used. 00:25:19.471 Running I/O for 2 seconds... 00:25:19.471 [2024-11-17 08:25:24.360096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.360206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.360253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.365682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.365852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.365895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.370697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.370966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.371010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.375946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.376115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.376145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.381033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.381189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.381219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.386140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.386391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.386429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.391356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.391524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.391565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.396404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.396574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.396604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.401546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.401683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.401713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.406546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.406680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.406718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.411467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.411616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.411654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.416362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.416522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.416551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.421368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.421539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.421569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.426344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.426587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.426630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.431696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.431851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.431889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.436610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.436762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.436792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.441605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.471 [2024-11-17 08:25:24.441732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.471 [2024-11-17 08:25:24.441762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.471 [2024-11-17 08:25:24.446601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.446751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.446790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.472 [2024-11-17 08:25:24.451605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.451772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.451812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.472 [2024-11-17 08:25:24.456582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.456766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.456796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.472 [2024-11-17 08:25:24.461565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.461714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.461743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.472 [2024-11-17 08:25:24.466764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.466925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.466968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.472 [2024-11-17 08:25:24.471926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.472044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.472082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.472 [2024-11-17 08:25:24.477076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.472 [2024-11-17 08:25:24.477324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.472 [2024-11-17 08:25:24.477355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.732 [2024-11-17 08:25:24.482795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.732 [2024-11-17 08:25:24.482955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.732 [2024-11-17 08:25:24.482990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.732 [2024-11-17 08:25:24.488132] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.732 [2024-11-17 08:25:24.488290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.732 [2024-11-17 08:25:24.488350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.732 [2024-11-17 08:25:24.493323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.732 [2024-11-17 08:25:24.493541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.732 [2024-11-17 08:25:24.493571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.732 [2024-11-17 08:25:24.498742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.732 [2024-11-17 08:25:24.498931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.732 [2024-11-17 08:25:24.498968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.503880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.504021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.504069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.509017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.509232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.509279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.514290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.514477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.514523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.519419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.519619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.519648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.524534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.524659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.524698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.529601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.529759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.529804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.534845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.535003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.535032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.539971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.540197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.540226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.545203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.545348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.545387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.550392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.550610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.550657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.555486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.555643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.555672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.560662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.560839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.560869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.565741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.565885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.565924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.570821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.571075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.571125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.575674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.575847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.575876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.580654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.580973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.581008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.585762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.586199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.586243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.591134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.591457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 
08:25:24.591508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.596346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.596674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.601434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.601768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.601804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.606567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.606867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.606913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.611697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.612013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.612067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.616814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.733 [2024-11-17 08:25:24.617286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.733 [2024-11-17 08:25:24.617317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.733 [2024-11-17 08:25:24.622026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.622352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.622388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.627203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.627533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.627578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.632337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.632798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.632849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.637685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.638029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.638066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.642796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.643125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.643161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.647948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.648405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.648447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.653333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.653647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.653702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.658405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.658760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.658796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.663699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.664015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.664055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.668881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.669190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.669231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.673979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.674338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.679120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.679583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.679626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.684464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.684773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.684820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.689503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.689808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.689853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.694541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.694871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.694907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.699608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.699932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.699968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.704623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.704929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.704974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.709714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.710022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.710078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.714744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.715076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.715113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.719711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.720027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.720063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.724714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.725019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.725063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.729747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.730049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.730094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.734886] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.735217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.735253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.734 [2024-11-17 08:25:24.740059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.734 [2024-11-17 08:25:24.740525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.734 [2024-11-17 08:25:24.740556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.745570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.745879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.745926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.750979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.751294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.751352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.756066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.756531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.756563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.761282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.761608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.761639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.766275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.766607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.766651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.771334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.771670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.771720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.776511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.776832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.776867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.781601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.781916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.781953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.786605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.786907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.786960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.791576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.791886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.791935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.796499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.796818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.796859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.801527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.801834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.801869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.806544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.806861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.806906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.811589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.811895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.811941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.816537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.995 [2024-11-17 08:25:24.816858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.995 [2024-11-17 08:25:24.816895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.995 [2024-11-17 08:25:24.821505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.821815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.821851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.826454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.826771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.826816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.831493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.831809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.831854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.836506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.836827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 
08:25:24.836863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.841491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.841801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.841836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.846514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.846822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.846870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.851594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.851899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.851945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.856623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.856981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.861627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.861942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.861978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.866678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.866985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.867038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.871719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.872025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.872075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.876621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.876941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.876977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.881609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.881924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.881960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.886639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.886942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.886986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.891616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.891916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.891972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.896528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.896853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.896889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.901540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.901839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.901874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.906158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.906226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.906262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.911129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.911202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.911245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.916047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.916256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.916286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.996 [2024-11-17 08:25:24.921177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.996 [2024-11-17 08:25:24.921260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.996 [2024-11-17 08:25:24.921289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.926091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.926164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.926201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.931072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.931146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.931192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.935977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.936176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.936206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.941126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.941205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.941234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.946099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.946168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.946203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.951111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.951324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.951363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.956346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.956436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.956482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.961262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.961341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.961371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.966203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.966289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.966327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.971212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.971391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.971429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.976294] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.976389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.976419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.981270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.981348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.981377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.986274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.986396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.986431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.991231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.991426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.991490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:24.996341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:24.996428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:24.996472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.997 [2024-11-17 08:25:25.001541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:19.997 [2024-11-17 08:25:25.001639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.997 [2024-11-17 08:25:25.001669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.257 [2024-11-17 08:25:25.006989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.257 [2024-11-17 08:25:25.007195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.257 [2024-11-17 08:25:25.007236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:20.257 [2024-11-17 08:25:25.012313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.257 [2024-11-17 08:25:25.012387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.257 [2024-11-17 08:25:25.012424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.257 [2024-11-17 08:25:25.017276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.257 [2024-11-17 08:25:25.017363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.257 [2024-11-17 08:25:25.017393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.257 [2024-11-17 08:25:25.022258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.257 [2024-11-17 08:25:25.022446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.022497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.027372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.027472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.027521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.032403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.032481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.032532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.037312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.037392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.037422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.042355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.042617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.042647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.047604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.047681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.047720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.052711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.052785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.052824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.057756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.057839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.057868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.062951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.063037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.063065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.068110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.068212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.068257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.073341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.073429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.073486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.078534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.078642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 
08:25:25.078671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.083689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.083792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.083821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.088757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.088831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.088869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.093794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.093864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.093912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.098752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.098874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.098905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.103798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.103878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.103907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.108715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.108783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.108821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.113735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.113808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.113846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.118662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.118744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.118772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.123641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.123721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.123751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.128525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.128595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.128642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.133488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.133557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.133597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.138338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.138436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.258 [2024-11-17 08:25:25.138489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.258 [2024-11-17 08:25:25.143329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.258 [2024-11-17 08:25:25.143413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.143453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.148258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.148328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.148374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.153271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.153475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.153514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.158291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.158381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.158411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.163375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.163469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.163502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.168475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.168547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.168596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.173368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.173454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.173495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.178356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.178462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.178501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.183339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.183417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.183464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.188263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.188471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.188529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.193350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.193424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.193488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.198347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.198430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.198484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.203409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.203513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.203544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.208262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.208471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.208528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.213378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.213492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.213529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.218314] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.218405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.218435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.223342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.223421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.223464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.228316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.229093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.229144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.233691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.233764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.233806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.238596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.238672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.238703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.243453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.243523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.243552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.248403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.248492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.248522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.253380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.253464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.253494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.258295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.258366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.258395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.259 [2024-11-17 08:25:25.263375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.259 [2024-11-17 08:25:25.263595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.259 [2024-11-17 08:25:25.263625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.268932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.269007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.269036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.274112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.274206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.274236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.279200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.279384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.279414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.284328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.284401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.284431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.289356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.289455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.289485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.294342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.294411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.294452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.299295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.299509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.304416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.304503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.304533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.309360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.309433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.309477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.314357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.314429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.314478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.319381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.319619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 
08:25:25.319649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.324558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.324644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.324673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.329368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.329450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.329480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.334369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.334454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.334492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.339320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.339535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.339570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.344422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.344514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.344544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.349311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.349395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.349424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.520 6052.00 IOPS, 756.50 MiB/s [2024-11-17T08:25:25.532Z] [2024-11-17 08:25:25.355136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.355338] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.355370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.360184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.360259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.360289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.365177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.520 [2024-11-17 08:25:25.365254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.520 [2024-11-17 08:25:25.365283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.520 [2024-11-17 08:25:25.370112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.370186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.370215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.375092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.375285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.375314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.380193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.380267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.380297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.385131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.385203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.385231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.389999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 
08:25:25.390068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.390099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.394889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.395099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.395129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.399955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.400045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.400076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.404959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.405039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.405068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.409884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.410071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.410101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.414933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.415027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.415057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.419941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.420013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.420043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.424793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.424876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.424905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.429731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.429802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.429832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.434644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.434735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.434764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.439516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.439583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.439612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.444363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.444434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.444480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.449272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.449472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.449505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.454351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.454423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.454480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.459313] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.459383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.459414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.464289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.464360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.464390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.469242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.521 [2024-11-17 08:25:25.469434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.521 [2024-11-17 08:25:25.469477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.521 [2024-11-17 08:25:25.474309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.474398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.474428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.479270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.479343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.479372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.484252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.484322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.484350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.489202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.489396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.489426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.494267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.494349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.494378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.499235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.499309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.499354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.504182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.504257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.504286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.509178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.509356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.509387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.514188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.514262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.514291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.519143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.519229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.519260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.522 [2024-11-17 08:25:25.524073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.522 [2024-11-17 08:25:25.524145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.522 [2024-11-17 08:25:25.524175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.529366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.529568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 08:25:25.529598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.534617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.534734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 08:25:25.534763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.539760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.539833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 08:25:25.539862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.544654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.544728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 08:25:25.544757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.549667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.549739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 08:25:25.549768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.554559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.554631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 08:25:25.554662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.559492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.782 [2024-11-17 08:25:25.559564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.782 [2024-11-17 
08:25:25.559594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.782 [2024-11-17 08:25:25.564341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.564415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.564444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.569315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.569530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.569561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.574436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.574547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.574577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.579312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.579387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.579417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.584279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.584428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.584458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.589299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.589520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.589551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.594391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.594488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.594543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.599412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.599515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.599546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.604335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.604416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.604461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.609328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.609546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.609575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.614395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.614514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.614544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.619350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.619421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.619464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.624217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.624290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.624320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.629136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.629333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.629362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.634186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.634259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.634288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.639133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.639206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.639235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.644065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.644149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.644178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.649039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.649228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.649259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.654091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.654168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.654198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.783 [2024-11-17 08:25:25.659034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.783 [2024-11-17 08:25:25.659109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.783 [2024-11-17 08:25:25.659139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.664072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.664262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.664293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.669180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.669254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.669283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.674281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.674360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.674390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.679394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.679485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.679516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.684487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.684570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.684598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.689586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.689660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.689689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.694678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.694768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.694798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.699831] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.699910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.699941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.705026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.705102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.710142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.710357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.710388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.715487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.715565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.715595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.720625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.720696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.720726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.725676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.725751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.725781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.730755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.730830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.730860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.784 
[2024-11-17 08:25:25.735865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.735936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.735983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.740953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.741151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.741181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.746110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.746187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.746217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.751211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.751283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.751312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.756308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.756551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.756581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.761422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.761504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.784 [2024-11-17 08:25:25.761534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.784 [2024-11-17 08:25:25.766343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.784 [2024-11-17 08:25:25.766412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.785 [2024-11-17 08:25:25.766443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.785 [2024-11-17 08:25:25.771462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.785 [2024-11-17 08:25:25.771556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.785 [2024-11-17 08:25:25.771586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.785 [2024-11-17 08:25:25.776570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.785 [2024-11-17 08:25:25.776641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.785 [2024-11-17 08:25:25.776670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:20.785 [2024-11-17 08:25:25.781547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.785 [2024-11-17 08:25:25.781635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.785 [2024-11-17 08:25:25.781665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.785 [2024-11-17 08:25:25.786585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:20.785 [2024-11-17 08:25:25.786657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.785 [2024-11-17 08:25:25.786688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.791867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.791951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.045 [2024-11-17 08:25:25.791980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.797129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.797216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.045 [2024-11-17 08:25:25.797246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.802309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.802549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.045 [2024-11-17 08:25:25.802579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.807589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.807688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.045 [2024-11-17 08:25:25.807717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.812707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.812780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.045 [2024-11-17 08:25:25.812810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.817742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.817828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.045 [2024-11-17 08:25:25.817857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.045 [2024-11-17 08:25:25.822804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.045 [2024-11-17 08:25:25.822882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.827899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.827972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.828001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.832863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.832936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.832965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.837905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.838133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:21.046 [2024-11-17 08:25:25.838164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.843188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.843265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.843295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.848196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.848269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.848299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.853316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.853536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.853565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.858428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.858539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.858569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.863444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.863532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.863563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.868567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.868649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.868678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.873519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.873589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.873619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.878597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.878670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.878700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.883598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.883672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.883718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.888620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.888693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.888722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.893586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.893657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.893687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.898568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.898643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.898675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.903542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.903633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.903663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.908690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.908762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.908790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.913660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.913725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.913754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.918607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.918674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.918705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.923482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.923549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.923579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.928411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.928490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.928520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.933301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.933364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.933392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.938324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.938391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.938421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.943265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.943330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.943359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.948111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.948201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.948231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.953126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.046 [2024-11-17 08:25:25.953195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.046 [2024-11-17 08:25:25.953225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.046 [2024-11-17 08:25:25.958013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.958092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.958122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.962967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.963034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.963064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.967915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.967993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.968021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.972825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.972904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.972934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.977828] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.977895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.977925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.982721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.982787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.982817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.987682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.987747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.987776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.992598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.992663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.992692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:25.997663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:25.997742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:25.997770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.002854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.002924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.002955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.007941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.008023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.008053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.013026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.013124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.013154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.018112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.018182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.018210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.023128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.023198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.023227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.028105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.028172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.028201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.033056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.033122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.033150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.037993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.038067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.038096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.042907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.042970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.042998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.047807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.047872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.047902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.047 [2024-11-17 08:25:26.052875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.047 [2024-11-17 08:25:26.052956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.047 [2024-11-17 08:25:26.052986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.058176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.058256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.058285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.063498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.063578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.063608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.068350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.068432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.068474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.073323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.073397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.073426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.078352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.078453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 
08:25:26.078491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.083364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.083436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.083482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.088432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.088548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.088576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.093558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.093642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.093670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.098663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.098728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.098758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.103800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.103871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.103899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.108901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.108972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.109001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.113923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.114006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10976 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.114034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.119058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.119129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.119159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.124138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.124217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.124246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.129070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.129135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.129165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.134011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.134077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.134107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.138918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.138983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.139012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.143808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.143887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.143916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.148680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.148747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.148778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.153493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.153561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.153591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.158397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.158500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.158531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.163218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.163286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.163316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.168103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.308 [2024-11-17 08:25:26.168168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.308 [2024-11-17 08:25:26.168196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.308 [2024-11-17 08:25:26.173021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.173088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.173116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.177945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.178009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.178038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.182839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.182904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.182934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.187717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.187783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.187812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.192583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.192655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.192684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.197534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.197600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.197631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.202537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.202602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.202632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.207414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.207507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.207537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.212325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.212403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.212431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.217173] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.217239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.217269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.222115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.222180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.222210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.227010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.227095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.227126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.231913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.231986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.232015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.236883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.236959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.236988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.241820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.241885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.241914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.246661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.246724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.246753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.309 
[2024-11-17 08:25:26.251558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.251634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.251662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.256429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.256508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.256537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.261311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.261378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.261406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.266187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.266253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.266282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.271095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.271160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.271190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.275983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.276048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.276078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.280887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.280952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.280981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.285751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.285816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.285845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.290624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.290694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.290723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.295534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.295611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.295640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.300368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.309 [2024-11-17 08:25:26.300471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.309 [2024-11-17 08:25:26.300502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.309 [2024-11-17 08:25:26.305262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.310 [2024-11-17 08:25:26.305328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.310 [2024-11-17 08:25:26.305357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.310 [2024-11-17 08:25:26.310288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.310 [2024-11-17 08:25:26.310353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.310 [2024-11-17 08:25:26.310382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.310 [2024-11-17 08:25:26.315451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.310 [2024-11-17 08:25:26.315537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.310 [2024-11-17 08:25:26.315567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.569 [2024-11-17 08:25:26.320681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.320752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.320781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.569 [2024-11-17 08:25:26.325834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.325901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.325931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.569 [2024-11-17 08:25:26.330872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.330941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.330972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.569 [2024-11-17 08:25:26.335792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.335869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.335899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.569 [2024-11-17 08:25:26.340730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.340805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.340834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:21.569 [2024-11-17 08:25:26.345613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.345680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.345710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:21.569 6117.50 IOPS, 764.69 MiB/s [2024-11-17T08:25:26.581Z] [2024-11-17 08:25:26.351515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:21.569 [2024-11-17 08:25:26.351602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.569 [2024-11-17 08:25:26.351633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:21.569 00:25:21.569 Latency(us) 00:25:21.569 [2024-11-17T08:25:26.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.569 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:21.569 nvme0n1 : 2.00 6116.13 764.52 0.00 0.00 2611.06 1710.78 10791.07 00:25:21.569 [2024-11-17T08:25:26.581Z] =================================================================================================================== 00:25:21.569 [2024-11-17T08:25:26.581Z] Total : 6116.13 764.52 0.00 0.00 2611.06 1710.78 10791.07 00:25:21.569 { 00:25:21.569 "results": [ 00:25:21.569 { 00:25:21.569 "job": "nvme0n1", 00:25:21.569 "core_mask": "0x2", 00:25:21.569 "workload": "randwrite", 00:25:21.569 "status": "finished", 00:25:21.569 "queue_depth": 16, 00:25:21.569 "io_size": 131072, 00:25:21.569 "runtime": 2.003881, 00:25:21.569 "iops": 6116.1316465398895, 00:25:21.569 "mibps": 764.5164558174862, 00:25:21.569 "io_failed": 0, 00:25:21.569 "io_timeout": 0, 00:25:21.569 "avg_latency_us": 2611.061486677782, 00:25:21.569 "min_latency_us": 1710.7791164658634, 00:25:21.569 "max_latency_us": 10791.068273092369 00:25:21.569 } 00:25:21.569 ], 00:25:21.569 "core_count": 1 00:25:21.569 } 00:25:21.569 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:21.569 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:21.569 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:21.569 | .driver_specific 00:25:21.569 | .nvme_error 00:25:21.569 | .status_code 00:25:21.569 | .command_transient_transport_error' 00:25:21.569 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 396 > 0 )) 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87130 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 87130 ']' 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 87130 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.827 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87130 00:25:21.828 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:21.828 killing process with pid 87130 00:25:21.828 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:21.828 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87130' 00:25:21.828 08:25:26 
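The `data_crc32_calc_done` errors repeated throughout this run come from the NVMe/TCP data digest (DDGST) check: the sender appends a CRC-32C of each data PDU's payload, the receiver recomputes it, and any mismatch is logged as a data digest error and completed back to the caller as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure — which is exactly the counter this digest-error test expects to be non-zero. Below is a minimal, self-contained sketch of that check; the bitwise CRC-32C and the helper names (`crc32c`, `data_digest_ok`, `received_ddgst`) are illustrative only and are not SPDK's implementation.

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), the digest NVMe/TCP uses for HDGST/DDGST."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)  # reflected polynomial
    return crc ^ 0xFFFFFFFF


def data_digest_ok(payload: bytes, received_ddgst: int) -> bool:
    # The receiver recomputes the digest over the PDU payload and compares it with
    # the 4-byte DDGST that trails the data; a mismatch is the "Data digest error"
    # seen above, and the request completes with a transient transport error.
    return crc32c(payload) == received_ddgst
```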
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 87130 00:25:21.828 Received shutdown signal, test time was about 2.000000 seconds 00:25:21.828 00:25:21.828 Latency(us) 00:25:21.828 [2024-11-17T08:25:26.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.828 [2024-11-17T08:25:26.840Z] =================================================================================================================== 00:25:21.828 [2024-11-17T08:25:26.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.828 08:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 87130 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86903 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86903 ']' 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86903 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86903 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.763 killing process with pid 86903 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86903' 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86903 00:25:22.763 08:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86903 00:25:24.140 00:25:24.140 real 0m21.643s 00:25:24.140 user 0m38.093s 00:25:24.140 sys 0m6.293s 00:25:24.140 08:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.140 08:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.140 ************************************ 00:25:24.140 END TEST nvmf_digest_error 00:25:24.140 ************************************ 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.140 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.140 rmmod nvme_tcp 00:25:24.140 rmmod nvme_fabrics 00:25:24.140 rmmod nvme_keyring 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 86903 ']' 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 86903 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 86903 ']' 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 86903 00:25:24.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86903) - No such process 00:25:24.398 Process with pid 86903 is not found 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 86903 is not found' 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:24.398 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.656 08:25:29 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:25:24.656 00:25:24.656 real 0m46.026s 00:25:24.656 user 1m19.146s 00:25:24.656 sys 0m13.411s 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.656 ************************************ 00:25:24.656 END TEST nvmf_digest 00:25:24.656 ************************************ 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.656 ************************************ 00:25:24.656 START TEST nvmf_host_multipath 00:25:24.656 ************************************ 00:25:24.656 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:24.917 * Looking for test storage... 00:25:24.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:24.917 08:25:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:24.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.917 --rc genhtml_branch_coverage=1 00:25:24.917 --rc genhtml_function_coverage=1 00:25:24.917 --rc genhtml_legend=1 00:25:24.917 --rc geninfo_all_blocks=1 00:25:24.917 --rc geninfo_unexecuted_blocks=1 00:25:24.917 00:25:24.917 ' 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:24.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.917 --rc genhtml_branch_coverage=1 00:25:24.917 --rc genhtml_function_coverage=1 00:25:24.917 --rc genhtml_legend=1 00:25:24.917 --rc geninfo_all_blocks=1 00:25:24.917 --rc geninfo_unexecuted_blocks=1 00:25:24.917 00:25:24.917 ' 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:24.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.917 --rc genhtml_branch_coverage=1 00:25:24.917 --rc genhtml_function_coverage=1 00:25:24.917 --rc genhtml_legend=1 00:25:24.917 --rc geninfo_all_blocks=1 00:25:24.917 --rc geninfo_unexecuted_blocks=1 00:25:24.917 00:25:24.917 ' 00:25:24.917 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:24.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.917 --rc genhtml_branch_coverage=1 00:25:24.917 --rc genhtml_function_coverage=1 00:25:24.917 --rc genhtml_legend=1 00:25:24.918 --rc geninfo_all_blocks=1 00:25:24.918 --rc geninfo_unexecuted_blocks=1 00:25:24.918 00:25:24.918 ' 00:25:24.918 
08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.918 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:24.918 Cannot find device "nvmf_init_br" 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:24.918 Cannot find device "nvmf_init_br2" 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:24.918 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:25.206 Cannot find device "nvmf_tgt_br" 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:25.206 Cannot find device "nvmf_tgt_br2" 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:25.206 Cannot find device "nvmf_init_br" 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:25.206 Cannot find device "nvmf_init_br2" 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:25:25.206 08:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:25.206 Cannot find device "nvmf_tgt_br" 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:25.206 Cannot find device "nvmf_tgt_br2" 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:25.206 Cannot find device "nvmf_br" 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:25.206 Cannot find device "nvmf_init_if" 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:25.206 Cannot find device "nvmf_init_if2" 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:25:25.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:25.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:25.206 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:25.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:25.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:25:25.476 00:25:25.476 --- 10.0.0.3 ping statistics --- 00:25:25.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.476 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:25.476 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:25.476 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:25:25.476 00:25:25.476 --- 10.0.0.4 ping statistics --- 00:25:25.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.476 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:25.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:25:25.476 00:25:25.476 --- 10.0.0.1 ping statistics --- 00:25:25.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.476 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:25.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:25.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:25:25.476 00:25:25.476 --- 10.0.0.2 ping statistics --- 00:25:25.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.476 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.476 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=87471 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 87471 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 87471 ']' 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.477 08:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:25.736 [2024-11-17 08:25:30.571566] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:25:25.736 [2024-11-17 08:25:30.571712] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.996 [2024-11-17 08:25:30.761294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:25.996 [2024-11-17 08:25:30.890770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.996 [2024-11-17 08:25:30.890832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.996 [2024-11-17 08:25:30.890849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.996 [2024-11-17 08:25:30.890872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.996 [2024-11-17 08:25:30.890888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.996 [2024-11-17 08:25:30.893163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.996 [2024-11-17 08:25:30.893198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.255 [2024-11-17 08:25:31.134533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=87471 00:25:26.514 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:26.772 [2024-11-17 08:25:31.610872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.772 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:27.030 Malloc0 00:25:27.030 08:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:27.289 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.548 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:27.548 [2024-11-17 08:25:32.522485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:27.549 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:27.808 [2024-11-17 08:25:32.706269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=87521 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 87521 /var/tmp/bdevperf.sock 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 87521 ']' 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.808 08:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:28.745 08:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.745 08:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:28.745 08:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:29.004 08:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:29.263 Nvme0n1 00:25:29.263 08:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:29.521 Nvme0n1 00:25:29.521 08:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:29.521 08:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87566 00:25:30.899 08:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:37.467 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:37.467 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:37.467 Attaching 4 probes... 00:25:37.467 @path[10.0.0.3, 4421]: 16937 00:25:37.467 @path[10.0.0.3, 4421]: 17240 00:25:37.467 @path[10.0.0.3, 4421]: 17225 00:25:37.467 @path[10.0.0.3, 4421]: 17190 00:25:37.467 @path[10.0.0.3, 4421]: 17241 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87566 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:37.467 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:37.725 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:37.725 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:37.725 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87674 00:25:37.725 08:25:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:44.295 Attaching 4 probes... 00:25:44.295 @path[10.0.0.3, 4420]: 18337 00:25:44.295 @path[10.0.0.3, 4420]: 18594 00:25:44.295 @path[10.0.0.3, 4420]: 18761 00:25:44.295 @path[10.0.0.3, 4420]: 18675 00:25:44.295 @path[10.0.0.3, 4420]: 18669 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87674 00:25:44.295 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:44.296 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:44.296 08:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:44.296 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:44.296 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:44.296 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87792 00:25:44.296 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:44.296 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:50.863 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:50.864 Attaching 4 probes... 00:25:50.864 @path[10.0.0.3, 4421]: 12268 00:25:50.864 @path[10.0.0.3, 4421]: 16883 00:25:50.864 @path[10.0.0.3, 4421]: 17050 00:25:50.864 @path[10.0.0.3, 4421]: 17053 00:25:50.864 @path[10.0.0.3, 4421]: 16996 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87792 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87903 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:50.864 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:57.559 08:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:57.559 08:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:57.559 Attaching 4 probes... 
00:25:57.559 00:25:57.559 00:25:57.559 00:25:57.559 00:25:57.559 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87903 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88017 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:57.559 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:04.142 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:04.143 Attaching 4 probes... 
00:26:04.143 @path[10.0.0.3, 4421]: 16452 00:26:04.143 @path[10.0.0.3, 4421]: 16711 00:26:04.143 @path[10.0.0.3, 4421]: 16776 00:26:04.143 @path[10.0.0.3, 4421]: 16723 00:26:04.143 @path[10.0.0.3, 4421]: 16728 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88017 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:04.143 08:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:05.079 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:05.079 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88136 00:26:05.079 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:05.079 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:11.651 08:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:11.651 08:26:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:11.651 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:11.651 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:11.651 Attaching 4 probes... 
00:26:11.651 @path[10.0.0.3, 4420]: 17479 00:26:11.651 @path[10.0.0.3, 4420]: 17832 00:26:11.651 @path[10.0.0.3, 4420]: 17887 00:26:11.651 @path[10.0.0.3, 4420]: 17919 00:26:11.651 @path[10.0.0.3, 4420]: 17887 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88136 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:11.652 [2024-11-17 08:26:16.398620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:11.652 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:18.223 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:18.223 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88313 00:26:18.223 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:18.223 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87471 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:24.795 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:24.795 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:24.795 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:24.795 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:24.795 Attaching 4 probes... 
00:26:24.795 @path[10.0.0.3, 4421]: 16650 00:26:24.795 @path[10.0.0.3, 4421]: 17234 00:26:24.795 @path[10.0.0.3, 4421]: 17177 00:26:24.795 @path[10.0.0.3, 4421]: 17210 00:26:24.795 @path[10.0.0.3, 4421]: 17221 00:26:24.795 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88313 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 87521 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 87521 ']' 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 87521 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87521 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:24.796 killing process with pid 87521 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87521' 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 87521 00:26:24.796 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 87521 00:26:24.796 { 00:26:24.796 "results": [ 00:26:24.796 { 00:26:24.796 "job": "Nvme0n1", 00:26:24.796 "core_mask": "0x4", 00:26:24.796 "workload": "verify", 00:26:24.796 "status": "terminated", 00:26:24.796 "verify_range": { 00:26:24.796 "start": 0, 00:26:24.796 "length": 16384 00:26:24.796 }, 00:26:24.796 "queue_depth": 128, 00:26:24.796 "io_size": 4096, 00:26:24.796 "runtime": 54.450748, 00:26:24.796 "iops": 7385.775490173248, 00:26:24.796 "mibps": 28.85068550848925, 00:26:24.796 "io_failed": 0, 00:26:24.796 "io_timeout": 0, 00:26:24.796 "avg_latency_us": 17314.34212763537, 00:26:24.796 "min_latency_us": 1171.225702811245, 00:26:24.796 "max_latency_us": 7061253.963052209 00:26:24.796 } 00:26:24.796 ], 00:26:24.796 "core_count": 1 00:26:24.796 } 00:26:25.071 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 87521 00:26:25.072 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:25.072 [2024-11-17 08:25:32.824970] Starting SPDK v25.01-pre git sha1 83e8405e4 / 
DPDK 24.03.0 initialization... 00:26:25.072 [2024-11-17 08:25:32.825116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87521 ] 00:26:25.072 [2024-11-17 08:25:32.999177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.072 [2024-11-17 08:25:33.136785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.072 [2024-11-17 08:25:33.372540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:25.072 Running I/O for 90 seconds... 00:26:25.072 7446.00 IOPS, 29.09 MiB/s [2024-11-17T08:26:30.084Z] 7986.00 IOPS, 31.20 MiB/s [2024-11-17T08:26:30.084Z] 8193.33 IOPS, 32.01 MiB/s [2024-11-17T08:26:30.084Z] 8297.00 IOPS, 32.41 MiB/s [2024-11-17T08:26:30.084Z] 8359.20 IOPS, 32.65 MiB/s [2024-11-17T08:26:30.084Z] 8395.33 IOPS, 32.79 MiB/s [2024-11-17T08:26:30.084Z] 8425.71 IOPS, 32.91 MiB/s [2024-11-17T08:26:30.084Z] [2024-11-17 08:25:42.528517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.528974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.528992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.529337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.529677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.072 [2024-11-17 08:25:42.529694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.530106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.530142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.530176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.530193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.530217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:25.072 [2024-11-17 08:25:42.530234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.072 [2024-11-17 08:25:42.530257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.072 [2024-11-17 08:25:42.530274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.530810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.530850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.530890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.530931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.530971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.530987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.531469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.073 [2024-11-17 08:25:42.531486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:26:25.073 [2024-11-17 08:25:42.532184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.532261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.532308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.532363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.532403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.532456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.073 [2024-11-17 08:25:42.532498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.073 [2024-11-17 08:25:42.532514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.532882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.532922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.532962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.532985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 
08:25:42.533457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.533521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.074 [2024-11-17 08:25:42.533538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.074 [2024-11-17 08:25:42.535329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15704 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.074 [2024-11-17 08:25:42.535362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.535759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535782] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.535799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.535839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.535901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.535973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.535990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 
08:25:42.536213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.075 [2024-11-17 08:25:42.536394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.536440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.536493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.536533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.075 [2024-11-17 08:25:42.536572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.075 [2024-11-17 08:25:42.536595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:42.536614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:42.536637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:42.536654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:42.536677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:42.536694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:42.536718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:42.536736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.076 8441.25 IOPS, 32.97 MiB/s [2024-11-17T08:26:30.088Z] 8505.44 IOPS, 33.22 MiB/s [2024-11-17T08:26:30.088Z] 8584.50 IOPS, 33.53 MiB/s [2024-11-17T08:26:30.088Z] 8648.45 IOPS, 33.78 MiB/s [2024-11-17T08:26:30.088Z] 8706.42 IOPS, 34.01 MiB/s [2024-11-17T08:26:30.088Z] 8755.46 IOPS, 34.20 MiB/s [2024-11-17T08:26:30.088Z] 8797.50 IOPS, 34.37 MiB/s [2024-11-17T08:26:30.088Z] [2024-11-17 08:25:48.986251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.986966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.986988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:25.076 [2024-11-17 08:25:48.987012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.076 [2024-11-17 08:25:48.987335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.987380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.987419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.987472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.076 [2024-11-17 08:25:48.987496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.076 [2024-11-17 08:25:48.987519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.077 [2024-11-17 08:25:48.987560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.077 [2024-11-17 08:25:48.987599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.077 [2024-11-17 08:25:48.987641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.077 [2024-11-17 08:25:48.987681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987825] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.987962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.987985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.077 [2024-11-17 08:25:48.988646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.077 [2024-11-17 08:25:48.988669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.988686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.988967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.988984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 
08:25:48.989086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.989699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.989965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.989982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.990005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.078 [2024-11-17 08:25:48.990021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.990044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.990061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.078 [2024-11-17 08:25:48.990089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.078 [2024-11-17 08:25:48.990106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 00:26:25.079 [2024-11-17 08:25:48.990294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.990700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.990956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.990997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.991020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.079 [2024-11-17 08:25:48.992282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.079 [2024-11-17 08:25:48.992774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.079 [2024-11-17 08:25:48.992797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.992814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.992836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.992852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.992875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.992900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.992923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.992940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.992966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.992984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.993590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 
08:25:48.993967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.993998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.080 [2024-11-17 08:25:48.994343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.994383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.994423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.994488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.994545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.994586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.080 [2024-11-17 08:25:48.994609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.080 [2024-11-17 08:25:48.994627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.081 [2024-11-17 08:25:48.994652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.081 [2024-11-17 08:25:48.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.081 [2024-11-17 08:25:48.994926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.081 [2024-11-17 08:25:48.994953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.081 [2024-11-17 08:25:48.994981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-11-17 08:25:48.994999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.081 [2024-11-17 08:25:48.995024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-11-17 08:25:48.995041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.081 [2024-11-17 08:25:48.995077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.081 [2024-11-17 08:25:48.995094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:26:25.081 [2024-11-17 08:25:48.995117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.081 [2024-11-17 08:25:48.995134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
[... the same NOTICE pair from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) repeats for every outstanding I/O on qid:1, nsid:1: READs (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITEs (SGL DATA BLOCK OFFSET 0x0 len:0x1000), len:8 each, across LBAs 99496-100512, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, timestamps 2024-11-17 08:25:48.995 through 08:25:49.021 (elapsed 00:26:25.081-00:26:25.087) ...]
00:26:25.087 [2024-11-17 08:25:49.021536] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.021557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.021959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.087 [2024-11-17 08:25:49.021980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.087 [2024-11-17 08:25:49.022511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.087 [2024-11-17 08:25:49.022540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.022970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.022999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.023644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.023694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.023745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.023795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.023845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.023896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.023947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.023976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.024004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.088 [2024-11-17 08:25:49.024054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.024083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.024104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.024134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.024154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.024184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.024205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.024235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.024256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.088 [2024-11-17 08:25:49.024286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.088 [2024-11-17 08:25:49.024307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.026560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.026665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.026718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.026769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.026819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026849] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.026882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.026932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.026963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.026984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.089 [2024-11-17 08:25:49.027886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.027926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.027966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.027990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.028007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.028037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.028055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.089 [2024-11-17 08:25:49.028084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.089 [2024-11-17 08:25:49.028102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.028142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.028183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.028224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.028894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.028934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.028973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.028997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.029013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:40 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.029058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.029098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.029138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.029178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.090 [2024-11-17 08:25:49.029218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029438] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.090 [2024-11-17 08:25:49.029527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.090 [2024-11-17 08:25:49.029548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.029870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.029910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.029949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.029973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.029989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.030042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.030088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.030129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.030168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.091 [2024-11-17 08:25:49.030207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030247] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 
08:25:49.030717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.030960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.030983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.091 [2024-11-17 08:25:49.031000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.091 [2024-11-17 08:25:49.031024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100040 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.092 [2024-11-17 08:25:49.031910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:26:25.092 [2024-11-17 08:25:49.031933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.031973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.031991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.032014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.032031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.032055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.032072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:49.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:49.032487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.092 8475.00 IOPS, 33.11 MiB/s [2024-11-17T08:26:30.104Z] 8213.12 IOPS, 32.08 MiB/s [2024-11-17T08:26:30.104Z] 8226.94 IOPS, 32.14 MiB/s [2024-11-17T08:26:30.104Z] 8240.67 IOPS, 32.19 MiB/s [2024-11-17T08:26:30.104Z] 8255.95 IOPS, 32.25 MiB/s [2024-11-17T08:26:30.104Z] 8268.20 IOPS, 32.30 MiB/s [2024-11-17T08:26:30.104Z] 8276.38 IOPS, 32.33 MiB/s [2024-11-17T08:26:30.104Z] [2024-11-17 08:25:55.858389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:55.858504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.092 [2024-11-17 08:25:55.858596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.092 [2024-11-17 08:25:55.858618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.858977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.858994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 
08:25:55.859139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.093 [2024-11-17 08:25:55.859438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102616 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.859968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.859993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:25.093 [2024-11-17 08:25:55.860365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.093 [2024-11-17 08:25:55.860392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.860433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.860505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.860548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.860590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.860652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:26:25.094 [2024-11-17 08:25:55.860762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.860980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.860997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.094 [2024-11-17 08:25:55.861799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.861841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.861883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.094 [2024-11-17 08:25:55.861926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:25.094 [2024-11-17 08:25:55.861950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.861967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.861992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.862009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102280 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.862057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.862099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.862141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.862973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.862990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.863035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.863079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.863124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 
08:25:55.863151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.863176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.095 [2024-11-17 08:25:55.863221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.095 [2024-11-17 08:25:55.863716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.095 [2024-11-17 08:25:55.863733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.863760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.863778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.863805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.863822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.863850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.863867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.863894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.863912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.863947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.863992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.096 [2024-11-17 08:25:55.864326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:25:55.864822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:25:55.864839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.096 8027.45 IOPS, 31.36 MiB/s [2024-11-17T08:26:30.108Z] 7678.43 IOPS, 29.99 MiB/s [2024-11-17T08:26:30.108Z] 7358.50 IOPS, 28.74 MiB/s [2024-11-17T08:26:30.108Z] 7064.16 IOPS, 27.59 MiB/s [2024-11-17T08:26:30.108Z] 6792.46 IOPS, 26.53 MiB/s [2024-11-17T08:26:30.108Z] 6540.89 IOPS, 25.55 MiB/s [2024-11-17T08:26:30.108Z] 6307.29 IOPS, 24.64 MiB/s [2024-11-17T08:26:30.108Z] 6276.52 IOPS, 24.52 MiB/s [2024-11-17T08:26:30.108Z] 6344.90 IOPS, 24.78 MiB/s [2024-11-17T08:26:30.108Z] 6410.68 IOPS, 25.04 MiB/s [2024-11-17T08:26:30.108Z] 6472.34 IOPS, 25.28 MiB/s [2024-11-17T08:26:30.108Z] 6529.42 IOPS, 25.51 MiB/s [2024-11-17T08:26:30.108Z] 6582.44 IOPS, 25.71 MiB/s [2024-11-17T08:26:30.108Z] [2024-11-17 08:26:08.936681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:26:08.936743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:26:08.936813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:26:08.936833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:26:08.936858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:26:08.936875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:26:08.936899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:26:08.936917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:26:08.936964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:26:08.936981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:26:08.937004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.096 [2024-11-17 08:26:08.937021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.096 [2024-11-17 08:26:08.937043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114752 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.097 [2024-11-17 08:26:08.937948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.937966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.937983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 
[2024-11-17 08:26:08.938234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.097 [2024-11-17 08:26:08.938389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.097 [2024-11-17 08:26:08.938406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.938671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.938967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.938984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.098 [2024-11-17 08:26:08.939227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-17 08:26:08.939640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.098 [2024-11-17 08:26:08.939656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.939973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.939991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 
[2024-11-17 08:26:08.940024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940360] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.099 [2024-11-17 08:26:08.940490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.099 [2024-11-17 08:26:08.940806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.099 [2024-11-17 08:26:08.940821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.940838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.940854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.940870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.940886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.940903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.940918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.940935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.940950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.940967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.940983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.941279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.941542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.100 [2024-11-17 08:26:08.941558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.942137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.100 [2024-11-17 08:26:08.942176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.942196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.100 [2024-11-17 08:26:08.942221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.942239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.100 [2024-11-17 08:26:08.942254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.942271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.100 [2024-11-17 08:26:08.942287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.942305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.100 [2024-11-17 08:26:08.942322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.100 [2024-11-17 08:26:08.942347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:25.100 [2024-11-17 08:26:08.943372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.100 [2024-11-17 08:26:08.943433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:26:25.100 [2024-11-17 08:26:08.943819] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.100 [2024-11-17 08:26:08.943852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:26:25.100 [2024-11-17 08:26:08.943872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:25.100 [2024-11-17 08:26:08.943913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:26:25.100 [2024-11-17 08:26:08.943943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.100 [2024-11-17 08:26:08.943961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.100 [2024-11-17 08:26:08.943981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.100 [2024-11-17 08:26:08.944000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.100 [2024-11-17 08:26:08.944017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.100 6638.91 IOPS, 25.93 MiB/s [2024-11-17T08:26:30.112Z] 6699.83 IOPS, 26.17 MiB/s [2024-11-17T08:26:30.112Z] 6755.51 IOPS, 26.39 MiB/s [2024-11-17T08:26:30.112Z] 6812.05 IOPS, 26.61 MiB/s [2024-11-17T08:26:30.112Z] 6866.72 IOPS, 26.82 MiB/s [2024-11-17T08:26:30.112Z] 6918.45 IOPS, 27.03 MiB/s [2024-11-17T08:26:30.112Z] 6967.46 IOPS, 27.22 MiB/s [2024-11-17T08:26:30.112Z] 7013.19 IOPS, 27.40 MiB/s [2024-11-17T08:26:30.112Z] 7056.79 IOPS, 27.57 MiB/s [2024-11-17T08:26:30.112Z] 7100.77 IOPS, 27.74 MiB/s [2024-11-17T08:26:30.112Z] [2024-11-17 08:26:18.975247] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:26:25.100 7136.38 IOPS, 27.88 MiB/s [2024-11-17T08:26:30.112Z] 7168.02 IOPS, 28.00 MiB/s [2024-11-17T08:26:30.112Z] 7199.85 IOPS, 28.12 MiB/s [2024-11-17T08:26:30.112Z] 7230.02 IOPS, 28.24 MiB/s [2024-11-17T08:26:30.112Z] 7252.27 IOPS, 28.33 MiB/s [2024-11-17T08:26:30.112Z] 7279.38 IOPS, 28.44 MiB/s [2024-11-17T08:26:30.112Z] 7305.27 IOPS, 28.54 MiB/s [2024-11-17T08:26:30.113Z] 7329.71 IOPS, 28.63 MiB/s [2024-11-17T08:26:30.113Z] 7353.38 IOPS, 28.72 MiB/s [2024-11-17T08:26:30.113Z] 7376.31 IOPS, 28.81 MiB/s
[2024-11-17T08:26:30.113Z] Received shutdown signal, test time was about 54.451431 seconds
00:26:25.101
00:26:25.101                                                            Latency(us)
00:26:25.101 [2024-11-17T08:26:30.113Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:25.101 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:25.101      Verification LBA range: start 0x0 length 0x4000
00:26:25.101      Nvme0n1             :      54.45    7385.78      28.85       0.00       0.00   17314.34    1171.23 7061253.96
00:26:25.101 [2024-11-17T08:26:30.113Z] ===================================================================================================================
00:26:25.101 [2024-11-17T08:26:30.113Z] Total               :               7385.78      28.85       0.00       0.00   17314.34    1171.23 7061253.96
00:26:25.101 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:25.360 rmmod nvme_tcp
00:26:25.360 rmmod nvme_fabrics
00:26:25.360 rmmod nvme_keyring
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 87471 ']'
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 87471
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 87471 ']'
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 87471
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87471 00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.360 killing process with pid 87471 00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87471' 00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 87471 00:26:25.360 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 87471 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:26.740 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:27.000 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:27.259 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:27.260 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.260 08:26:32 
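The records above finish the run by stopping the nvmf target (pid 87471) and unwinding the NET_TYPE=virt network: the SPDK_NVMF rules are filtered out of the iptables-save output and re-applied with iptables-restore, the veth and bridge interfaces are detached and deleted, and the interfaces inside the nvmf_tgt_ns_spdk namespace are removed. The sketch below condenses the same steps into one shell function for reference; it is not the literal nvmf/common.sh code, the interface and namespace names are simply copied from the trace, and the final namespace deletion is an assumption because _remove_spdk_ns runs with tracing suppressed.

# Illustrative condensation of the teardown traced above (assumes root privileges).
cleanup_nvmf_tcp() {
    local tgt_pid=$1                       # 87471 in this run
    sync
    modprobe -v -r nvme-tcp || true        # the real script retries this in a loop
    modprobe -v -r nvme-fabrics || true
    kill "$tgt_pid"                        # stop the nvmf target process
    # Drop the SPDK_NVMF firewall rules added during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Unwind the veth/bridge topology used when NET_TYPE=virt.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk       # assumption: the step hidden behind _remove_spdk_ns
}

The ordering mirrors the trace: the kernel NVMe modules are unloaded and the target process is stopped before any interfaces are deleted, so nothing is still holding TCP connections when the links go away.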
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:26:27.260
00:26:27.260 real    1m2.495s
00:26:27.260 user    2m47.290s
00:26:27.260 sys     0m21.402s
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:26:27.260 ************************************
00:26:27.260 END TEST nvmf_host_multipath
00:26:27.260 ************************************
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.260 ************************************
00:26:27.260 START TEST nvmf_timeout
00:26:27.260 ************************************
00:26:27.260 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:26:27.520 * Looking for test storage...
00:26:27.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:26:27.520 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.521 --rc genhtml_branch_coverage=1 00:26:27.521 --rc genhtml_function_coverage=1 00:26:27.521 --rc genhtml_legend=1 00:26:27.521 --rc geninfo_all_blocks=1 00:26:27.521 --rc geninfo_unexecuted_blocks=1 00:26:27.521 00:26:27.521 ' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.521 --rc genhtml_branch_coverage=1 00:26:27.521 --rc genhtml_function_coverage=1 00:26:27.521 --rc genhtml_legend=1 00:26:27.521 --rc geninfo_all_blocks=1 00:26:27.521 --rc geninfo_unexecuted_blocks=1 00:26:27.521 00:26:27.521 ' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.521 --rc genhtml_branch_coverage=1 00:26:27.521 --rc genhtml_function_coverage=1 00:26:27.521 --rc genhtml_legend=1 00:26:27.521 --rc geninfo_all_blocks=1 00:26:27.521 --rc geninfo_unexecuted_blocks=1 00:26:27.521 00:26:27.521 ' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:27.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.521 --rc genhtml_branch_coverage=1 00:26:27.521 --rc genhtml_function_coverage=1 00:26:27.521 --rc genhtml_legend=1 00:26:27.521 --rc geninfo_all_blocks=1 00:26:27.521 --rc geninfo_unexecuted_blocks=1 00:26:27.521 00:26:27.521 ' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.521 
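The block above is the coverage probe from autotest_common.sh: it reads the installed lcov version and runs it through the lt/cmp_versions helpers in scripts/common.sh to decide whether the legacy --rc lcov_* option names are needed. A minimal sketch of that dotted-version comparison, written here for illustration only (the real helper also splits on '-' and ':' and handles extra fields):

  # version_lt A B -> success (0) when dotted version A is strictly lower than B
  version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      x=${a[i]:-0} y=${b[i]:-0}          # missing fields count as 0
      (( x < y )) && return 0
      (( x > y )) && return 1
    done
    return 1                             # equal versions are not "less than"
  }

  # The check traced above, in effect:
  version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* option names'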
08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.521 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.521 08:26:32 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:27.521 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:27.522 Cannot find device "nvmf_init_br" 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:27.522 Cannot find device "nvmf_init_br2" 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:26:27.522 Cannot find device "nvmf_tgt_br" 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.522 Cannot find device "nvmf_tgt_br2" 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:26:27.522 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:27.781 Cannot find device "nvmf_init_br" 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:27.781 Cannot find device "nvmf_init_br2" 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:27.781 Cannot find device "nvmf_tgt_br" 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:27.781 Cannot find device "nvmf_tgt_br2" 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:27.781 Cannot find device "nvmf_br" 00:26:27.781 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:27.782 Cannot find device "nvmf_init_if" 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:27.782 Cannot find device "nvmf_init_if2" 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:27.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:27.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
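Stripped of the xtrace noise, nvmf_veth_init has so far created the target network namespace and the veth pairs: the *_br ends stay in the root namespace (they will be enslaved to a bridge), while the target-side *_if ends are pushed into the namespace the nvmf target will run in. A reduced sketch with one pair of each kind (the real helper creates two initiator and two target pairs):

  ip netns add nvmf_tgt_ns_spdk
  # initiator-side pair: nvmf_init_if keeps the initiator IP in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  # target-side pair: nvmf_tgt_if is handed to the target's namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk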
00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:27.782 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
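The rest of the topology work shown above: address the endpoints (10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the namespace), bring the links up, enslave the *_br ends to a single nvmf_br bridge, and open the NVMe/TCP listener port in iptables. Condensed to the first interface of each kind, and without the SPDK_NVMF bookkeeping comments the ipts wrapper adds for later cleanup:

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings that follow (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside it) are the sanity check that this bridge actually forwards in both directions.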
00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:28.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:28.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:26:28.041 00:26:28.041 --- 10.0.0.3 ping statistics --- 00:26:28.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.041 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:28.041 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:28.041 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:28.041 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:26:28.041 00:26:28.041 --- 10.0.0.4 ping statistics --- 00:26:28.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.041 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:28.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:28.042 00:26:28.042 --- 10.0.0.1 ping statistics --- 00:26:28.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.042 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:28.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:26:28.042 00:26:28.042 --- 10.0.0.2 ping statistics --- 00:26:28.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.042 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.042 08:26:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=88700 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 88700 00:26:28.042 08:26:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88700 ']' 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.042 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.301 [2024-11-17 08:26:33.153489] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:28.301 [2024-11-17 08:26:33.153625] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.561 [2024-11-17 08:26:33.342788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:28.561 [2024-11-17 08:26:33.451279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.561 [2024-11-17 08:26:33.451333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.561 [2024-11-17 08:26:33.451348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.561 [2024-11-17 08:26:33.451370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.561 [2024-11-17 08:26:33.451382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
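nvmfappstart has now launched the target inside the namespace (core mask 0x3, shm id 0, all tracepoint groups enabled) and waitforlisten is blocking until the RPC socket answers. A compact stand-in for that start/wait step, with paths relative to the spdk repo and a simplified poll loop in place of the real helper:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to serve requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt up as pid $nvmfpid"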
00:26:28.561 [2024-11-17 08:26:33.453290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.561 [2024-11-17 08:26:33.453339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.820 [2024-11-17 08:26:33.655947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:29.079 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.079 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:29.079 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.079 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.079 08:26:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.079 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.079 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.079 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:29.338 [2024-11-17 08:26:34.207675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.338 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:29.598 Malloc0 00:26:29.598 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.857 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:30.117 08:26:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:30.117 [2024-11-17 08:26:35.122934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:30.376 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:30.376 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88745 00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88745 /var/tmp/bdevperf.sock 00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88745 ']' 00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:30.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
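With the target up, timeout.sh provisions it over RPC: a TCP transport, a 64 MiB / 512 B-block malloc bdev, a subsystem with one namespace, and a listener on the in-namespace address. The same sequence as the rpc.py calls traced above, gathered in one place (values are the ones this run uses):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420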
00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.377 08:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.377 [2024-11-17 08:26:35.222441] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:30.377 [2024-11-17 08:26:35.222610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88745 ] 00:26:30.636 [2024-11-17 08:26:35.407713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.636 [2024-11-17 08:26:35.536954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.896 [2024-11-17 08:26:35.766269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:31.156 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.156 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:31.156 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:31.415 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:31.674 NVMe0n1 00:26:31.674 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88773 00:26:31.674 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:31.674 08:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:31.674 Running I/O for 10 seconds... 
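On the initiator side, the trace above starts bdevperf idle (-z) on core 0x4 with its own RPC socket, attaches the remote namespace as bdev NVMe0 over TCP with a 5 s controller-loss timeout and 2 s reconnect delay, and then fires the pre-queued 128-deep, 4 KiB verify job for 10 seconds via bdevperf.py. Collected into one sketch, with paths relative to the spdk repo and a simple socket wait standing in for waitforlisten:

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done   # wait for bdevperf's RPC socket
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # the listener is then removed mid-run (nvmf_subsystem_remove_listener, next entry)
  # to exercise the reconnect/timeout path under load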
00:26:32.611 08:26:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:32.874 8400.00 IOPS, 32.81 MiB/s [2024-11-17T08:26:37.886Z] [2024-11-17 08:26:37.750762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:26:32.874 [2024-11-17 08:26:37.750826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:26:32.874 [2024-11-17 08:26:37.750840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:26:32.874 [2024-11-17 08:26:37.750934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.874 [2024-11-17 08:26:37.750980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.874 [2024-11-17 08:26:37.751028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.874 [2024-11-17 08:26:37.751058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.874 [2024-11-17 08:26:37.751086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.874 [2024-11-17 08:26:37.751441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.874 [2024-11-17 08:26:37.751469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.751486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.751515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 
08:26:37.751528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.751544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.751572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.751982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.751995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.875 [2024-11-17 08:26:37.752534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.875 [2024-11-17 08:26:37.752635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.875 [2024-11-17 08:26:37.752650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 
[2024-11-17 08:26:37.752732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.752974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.752989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.876 [2024-11-17 08:26:37.753468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.876 [2024-11-17 08:26:37.753786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.876 [2024-11-17 08:26:37.753799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.753813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.753826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.753841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.753854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.753868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.753881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.753895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 
08:26:37.753908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.753925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.753938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.753956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.753973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.753988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.754016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.754044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.754072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.754100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.754127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.877 [2024-11-17 08:26:37.754165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.877 [2024-11-17 08:26:37.754741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.754754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:32.877 [2024-11-17 08:26:37.754774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:32.877 [2024-11-17 08:26:37.754786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:32.877 [2024-11-17 08:26:37.754801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:26:32.877 [2024-11-17 08:26:37.754814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.755215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.877 [2024-11-17 08:26:37.755247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.755263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.877 [2024-11-17 08:26:37.755278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.755291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.877 [2024-11-17 08:26:37.755310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.755323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.877 [2024-11-17 08:26:37.755340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.877 [2024-11-17 08:26:37.755352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:32.877 [2024-11-17 08:26:37.755575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:32.877 [2024-11-17 08:26:37.755621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:32.877 [2024-11-17 08:26:37.755761] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.878 [2024-11-17 08:26:37.755798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:32.878 [2024-11-17 08:26:37.755813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:32.878 [2024-11-17 08:26:37.755839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:32.878 [2024-11-17 08:26:37.755858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:32.878 [2024-11-17 08:26:37.755873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:32.878 [2024-11-17 08:26:37.755888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:32.878 [2024-11-17 08:26:37.755907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
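A note on the completions dumped above, for readers skimming the log: each pair of records is nvme_io_qpair_print_command (the queued READ/WRITE with its sqid/cid/nsid/lba) followed by spdk_nvme_print_completion reporting "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x0 (generic) with status code 0x08 (Command Aborted due to SQ Deletion): every I/O still outstanding on qpair 1 is failed back when the queue is torn down for the controller reset. The "connect() failed, errno = 111" lines that follow are ECONNREFUSED, since nothing is accepting connections on 10.0.0.3:4420 at this point while bdev_nvme keeps retrying the reconnect. A hypothetical one-liner (not part of the test) to decode that errno on Linux:

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused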
00:26:32.878 [2024-11-17 08:26:37.755922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:32.878 08:26:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:34.753 4754.00 IOPS, 18.57 MiB/s [2024-11-17T08:26:39.765Z] 3169.33 IOPS, 12.38 MiB/s [2024-11-17T08:26:39.765Z] [2024-11-17 08:26:39.752824] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.753 [2024-11-17 08:26:39.752882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:34.753 [2024-11-17 08:26:39.752916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:34.753 [2024-11-17 08:26:39.752948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:34.753 [2024-11-17 08:26:39.752971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:34.753 [2024-11-17 08:26:39.752987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:34.753 [2024-11-17 08:26:39.753002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:34.753 [2024-11-17 08:26:39.753021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:34.753 [2024-11-17 08:26:39.753036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:35.013 08:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:35.272 08:26:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:35.272 08:26:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:36.780 2377.00 IOPS, 9.29 MiB/s [2024-11-17T08:26:41.792Z] 1901.60 IOPS, 7.43 MiB/s [2024-11-17T08:26:41.792Z] [2024-11-17 08:26:41.749977] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.780 [2024-11-17 08:26:41.750041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:36.780 [2024-11-17 08:26:41.750077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:36.780 [2024-11-17 08:26:41.750115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:36.780 [2024-11-17 08:26:41.750141] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:36.780 [2024-11-17 08:26:41.750162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:36.780 [2024-11-17 08:26:41.750178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:36.780 [2024-11-17 08:26:41.750198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:36.780 [2024-11-17 08:26:41.750214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:38.660 1584.67 IOPS, 6.19 MiB/s [2024-11-17T08:26:43.931Z] 1358.29 IOPS, 5.31 MiB/s [2024-11-17T08:26:43.931Z] [2024-11-17 08:26:43.747050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:38.919 [2024-11-17 08:26:43.747105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:38.919 [2024-11-17 08:26:43.747123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:38.919 [2024-11-17 08:26:43.747138] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:38.919 [2024-11-17 08:26:43.747160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:39.858 1188.50 IOPS, 4.64 MiB/s 00:26:39.858 Latency(us) 00:26:39.858 [2024-11-17T08:26:44.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.858 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:39.858 Verification LBA range: start 0x0 length 0x4000 00:26:39.858 NVMe0n1 : 8.11 1172.41 4.58 15.78 0.00 107896.78 3211.00 7061253.96 00:26:39.858 [2024-11-17T08:26:44.870Z] =================================================================================================================== 00:26:39.858 [2024-11-17T08:26:44.870Z] Total : 1172.41 4.58 15.78 0.00 107896.78 3211.00 7061253.96 00:26:39.858 { 00:26:39.858 "results": [ 00:26:39.858 { 00:26:39.858 "job": "NVMe0n1", 00:26:39.858 "core_mask": "0x4", 00:26:39.858 "workload": "verify", 00:26:39.858 "status": "finished", 00:26:39.858 "verify_range": { 00:26:39.858 "start": 0, 00:26:39.858 "length": 16384 00:26:39.858 }, 00:26:39.858 "queue_depth": 128, 00:26:39.858 "io_size": 4096, 00:26:39.858 "runtime": 8.109766, 00:26:39.858 "iops": 1172.413606015266, 00:26:39.858 "mibps": 4.579740648497133, 00:26:39.858 "io_failed": 128, 00:26:39.858 "io_timeout": 0, 00:26:39.858 "avg_latency_us": 107896.78268474479, 00:26:39.858 "min_latency_us": 3211.000803212851, 00:26:39.858 "max_latency_us": 7061253.963052209 00:26:39.858 } 00:26:39.858 ], 00:26:39.858 "core_count": 1 00:26:39.858 } 00:26:40.427 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:40.427 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:40.427 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:40.427 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:40.687 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # 
get_bdev 00:26:40.687 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:40.687 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 88773 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88745 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88745 ']' 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88745 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88745 00:26:40.688 killing process with pid 88745 00:26:40.688 Received shutdown signal, test time was about 9.061324 seconds 00:26:40.688 00:26:40.688 Latency(us) 00:26:40.688 [2024-11-17T08:26:45.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.688 [2024-11-17T08:26:45.700Z] =================================================================================================================== 00:26:40.688 [2024-11-17T08:26:45.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88745' 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88745 00:26:40.688 08:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88745 00:26:42.071 08:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:42.071 [2024-11-17 08:26:47.038960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88898 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88898 /var/tmp/bdevperf.sock 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88898 ']' 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
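A quick sanity check of the bdevperf summary above (numbers taken from the JSON results block; the check itself is not part of the test output): with 4096-byte I/Os, MiB/s is simply IOPS/256, and IOPS multiplied by runtime gives the total completed I/Os. The per-interval samples printed during the run (4754.00, 3169.33, 2377.00, ..., 1188.50 IOPS) are cumulative averages of that same ~9508-I/O total, which is why they decay steadily once completions stop early in the stalled run:

  awk 'BEGIN { printf "total I/Os ~ %.0f\n", 1172.413606 * 8.109766 }'         # ~9508 completions
  awk 'BEGIN { printf "MiB/s      = %.2f\n", 1172.413606 * 4096 / 1048576 }'   # 4.58, matching the table
  awk 'BEGIN { printf "avg @ 8s   = %.2f\n", 9508 / 8 }'                       # 1188.50, the last sample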
00:26:42.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.071 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.331 [2024-11-17 08:26:47.172127] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:42.331 [2024-11-17 08:26:47.172256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88898 ] 00:26:42.591 [2024-11-17 08:26:47.360040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.591 [2024-11-17 08:26:47.488779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.851 [2024-11-17 08:26:47.720199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:43.111 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.111 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:43.111 08:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:43.371 08:26:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:43.632 NVMe0n1 00:26:43.632 08:26:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88916 00:26:43.632 08:26:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:43.632 08:26:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:43.632 Running I/O for 10 seconds... 
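For orientation, the second timeout scenario being set up in the trace above boils down to the following RPC sequence (condensed from the commands already shown; the remove_listener step appears at the start of the next trace block, host/timeout.sh@87): re-add the TCP listener, start a fresh bdevperf (pid 88898), attach the controller with a 5 s controller-loss timeout, 2 s fast-io-fail timeout and 1 s reconnect delay, start the verify workload, then remove the listener so the I/O path times out:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

(Paths shortened relative to /home/vagrant/spdk_repo/spdk; the flags are exactly as they appear in the trace.)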
00:26:44.574 08:26:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:44.838 9138.00 IOPS, 35.70 MiB/s [2024-11-17T08:26:49.850Z] [2024-11-17 08:26:49.675029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.675259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.675461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.675578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.675648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.675745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.675813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.676056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.676077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.676090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.676107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.676120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.676136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.676148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.676165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.838 [2024-11-17 08:26:49.676177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.838 [2024-11-17 08:26:49.676194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.838 [2024-11-17 08:26:49.676206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82056 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:44.839 [2024-11-17 08:26:49.676561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 
08:26:49.676847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.839 [2024-11-17 08:26:49.676906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.839 [2024-11-17 08:26:49.676934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.839 [2024-11-17 08:26:49.676963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.676979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.839 [2024-11-17 08:26:49.676991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.677010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.839 [2024-11-17 08:26:49.677021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.839 [2024-11-17 08:26:49.677038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.839 [2024-11-17 08:26:49.677050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.840 [2024-11-17 08:26:49.677853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.840 [2024-11-17 08:26:49.677896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.840 [2024-11-17 08:26:49.677908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.677924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.677935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.677954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.677965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.677983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.677995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 
[2024-11-17 08:26:49.678042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.841 [2024-11-17 08:26:49.678309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.841 [2024-11-17 08:26:49.678700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.841 [2024-11-17 08:26:49.678716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82912 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.678975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.678986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 
08:26:49.679227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.842 [2024-11-17 08:26:49.679255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.842 [2024-11-17 08:26:49.679488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.842 [2024-11-17 08:26:49.679500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.843 [2024-11-17 08:26:49.679702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.679718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:44.843 [2024-11-17 08:26:49.679736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:44.843 [2024-11-17 08:26:49.679750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:44.843 [2024-11-17 08:26:49.679762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:26:44.843 [2024-11-17 08:26:49.679779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.843 [2024-11-17 08:26:49.680344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:44.843 [2024-11-17 08:26:49.680473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:44.843 [2024-11-17 08:26:49.680604] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-11-17 08:26:49.680626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with 
addr=10.0.0.3, port=4420
00:26:44.843 [2024-11-17 08:26:49.680646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set
00:26:44.843 [2024-11-17 08:26:49.680671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:26:44.843 [2024-11-17 08:26:49.680693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:44.843 [2024-11-17 08:26:49.680716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:44.843 [2024-11-17 08:26:49.680736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:44.843 [2024-11-17 08:26:49.680753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:44.843 [2024-11-17 08:26:49.680770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:44.843 08:26:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:45.783 5128.00 IOPS, 20.03 MiB/s [2024-11-17T08:26:50.796Z] [2024-11-17 08:26:50.679321] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.784 [2024-11-17 08:26:50.679387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420
00:26:45.784 [2024-11-17 08:26:50.679412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set
00:26:45.784 [2024-11-17 08:26:50.679458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:26:45.784 [2024-11-17 08:26:50.679504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:45.784 [2024-11-17 08:26:50.679518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:45.784 [2024-11-17 08:26:50.679545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:45.784 [2024-11-17 08:26:50.679563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:45.784 [2024-11-17 08:26:50.679582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:45.784 08:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:46.045 [2024-11-17 08:26:50.889582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:46.045 08:26:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88916
00:26:46.955 3418.67 IOPS, 13.35 MiB/s [2024-11-17T08:26:51.967Z] [2024-11-17 08:26:51.694233] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:48.986 2564.00 IOPS, 10.02 MiB/s [2024-11-17T08:26:54.936Z] 3989.00 IOPS, 15.58 MiB/s [2024-11-17T08:26:55.875Z] 5158.83 IOPS, 20.15 MiB/s [2024-11-17T08:26:56.813Z] 5991.00 IOPS, 23.40 MiB/s [2024-11-17T08:26:57.752Z] 6626.12 IOPS, 25.88 MiB/s [2024-11-17T08:26:58.690Z] 7115.22 IOPS, 27.79 MiB/s [2024-11-17T08:26:58.690Z] 7507.30 IOPS, 29.33 MiB/s
00:26:53.678 Latency(us)
00:26:53.678 [2024-11-17T08:26:58.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.678 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:53.678 Verification LBA range: start 0x0 length 0x4000
00:26:53.678 NVMe0n1 : 10.01 7511.85 29.34 0.00 0.00 17007.41 1526.54 3018551.31
00:26:53.678 [2024-11-17T08:26:58.690Z] ===================================================================================================================
00:26:53.678 [2024-11-17T08:26:58.690Z] Total : 7511.85 29.34 0.00 0.00 17007.41 1526.54 3018551.31
00:26:53.678 {
00:26:53.678 "results": [
00:26:53.678 {
00:26:53.678 "job": "NVMe0n1",
00:26:53.678 "core_mask": "0x4",
00:26:53.678 "workload": "verify",
00:26:53.678 "status": "finished",
00:26:53.678 "verify_range": {
00:26:53.678 "start": 0,
00:26:53.678 "length": 16384
00:26:53.678 },
00:26:53.678 "queue_depth": 128,
00:26:53.678 "io_size": 4096,
00:26:53.678 "runtime": 10.009115,
00:26:53.678 "iops": 7511.852946039685,
00:26:53.678 "mibps": 29.34317557046752,
00:26:53.678 "io_failed": 0,
00:26:53.678 "io_timeout": 0,
00:26:53.678 "avg_latency_us": 17007.41306216794,
00:26:53.678 "min_latency_us": 1526.5413654618474,
00:26:53.678 "max_latency_us": 3018551.3124497994
00:26:53.678 }
00:26:53.678 ],
00:26:53.678 "core_count": 1
00:26:53.678 }
00:26:53.678 08:26:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=89026
00:26:53.678 08:26:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:53.678 08:26:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:53.938 Running I/O for 10 seconds...
00:26:54.878 08:26:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:54.878 8392.00 IOPS, 32.78 MiB/s [2024-11-17T08:26:59.890Z] [2024-11-17 08:26:59.788920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.788977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75976 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 [2024-11-17 08:26:59.789497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.878 
[2024-11-17 08:26:59.789522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.878 [2024-11-17 08:26:59.789548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.878 [2024-11-17 08:26:59.789574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.878 [2024-11-17 08:26:59.789587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.789946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.789972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.789986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.789997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.879 [2024-11-17 08:26:59.790519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 
[2024-11-17 08:26:59.790584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.879 [2024-11-17 08:26:59.790609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.879 [2024-11-17 08:26:59.790620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.790875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.790901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.790926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.790951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.790976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.790989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75728 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.880 [2024-11-17 08:26:59.791488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.791514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.791539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.791565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 [2024-11-17 08:26:59.791590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.880 
[2024-11-17 08:26:59.791616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.880 [2024-11-17 08:26:59.791630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.791641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.791667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.791692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.791978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.791990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.792015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.792041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.792065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:54.881 [2024-11-17 08:26:59.792089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:54.881 [2024-11-17 08:26:59.792263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:54.881 [2024-11-17 08:26:59.792292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:54.881 [2024-11-17 08:26:59.792303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:54.881 [2024-11-17 08:26:59.792315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:26:54.881 [2024-11-17 08:26:59.792327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.881 [2024-11-17 08:26:59.792887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:54.881 [2024-11-17 08:26:59.792985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:54.881 [2024-11-17 08:26:59.793101] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.881 [2024-11-17 08:26:59.793123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:54.881 [2024-11-17 
08:26:59.793138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:54.881 [2024-11-17 08:26:59.793158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:54.881 [2024-11-17 08:26:59.793177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:54.881 [2024-11-17 08:26:59.793190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:54.881 [2024-11-17 08:26:59.793205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:54.881 [2024-11-17 08:26:59.793221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:54.881 [2024-11-17 08:26:59.793236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:54.881 08:26:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:55.820 4712.00 IOPS, 18.41 MiB/s [2024-11-17T08:27:00.832Z] [2024-11-17 08:27:00.791781] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.820 [2024-11-17 08:27:00.791988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:55.820 [2024-11-17 08:27:00.792167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:55.820 [2024-11-17 08:27:00.792248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:55.820 [2024-11-17 08:27:00.792524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:55.820 [2024-11-17 08:27:00.792579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:55.820 [2024-11-17 08:27:00.792633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:55.820 [2024-11-17 08:27:00.792670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
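Every entry in the dump above is a queued I/O that nvme_qpair_abort_queued_reqs() completed manually when the qpair was torn down for the controller reset: the "(00/08)" printed by spdk_nvme_print_completion is status code type 0x0 (generic) with status code 0x08, i.e. Command Aborted due to SQ Deletion. A minimal way to summarize such a dump, assuming the console output has been saved to build.log (a hypothetical filename):

    # Count the aborted completions, then break the aborted commands down by opcode.
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c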
00:26:55.820 [2024-11-17 08:27:00.792864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:57.017 3141.33 IOPS, 12.27 MiB/s [2024-11-17T08:27:02.029Z] [2024-11-17 08:27:01.791424] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.017 [2024-11-17 08:27:01.791596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:57.017 [2024-11-17 08:27:01.791754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:57.017 [2024-11-17 08:27:01.791794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:57.017 [2024-11-17 08:27:01.791832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:57.017 [2024-11-17 08:27:01.791847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:57.017 [2024-11-17 08:27:01.791865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:57.017 [2024-11-17 08:27:01.791882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:57.017 [2024-11-17 08:27:01.791899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:57.954 2356.00 IOPS, 9.20 MiB/s [2024-11-17T08:27:02.966Z] [2024-11-17 08:27:02.791935] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.954 [2024-11-17 08:27:02.791994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:57.954 [2024-11-17 08:27:02.792016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:57.954 [2024-11-17 08:27:02.792254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:57.954 [2024-11-17 08:27:02.792489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:57.954 [2024-11-17 08:27:02.792507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:57.954 [2024-11-17 08:27:02.792523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:57.954 [2024-11-17 08:27:02.792540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
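The connect() failures above report errno 111, which on Linux is ECONNREFUSED: the target application is still up, but its TCP listener on 10.0.0.3 port 4420 has evidently been removed, so every reconnect attempt is refused until host/timeout.sh re-adds the listener (the rpc call at @102, traced just below). The steps traced around this point (@101 through @103) correspond roughly to the following fragment; the address, port and NQN come from the trace, while the pid variable name is only illustrative:

    # Rough reconstruction of the traced host/timeout.sh steps, not the verbatim script.
    sleep 3                                                     # @101
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # @102: bring the listener back
    wait "$bdevperf_pid"                                        # @103: 89026 in this run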
00:26:57.954 [2024-11-17 08:27:02.792557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:57.954 08:27:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:58.212 [2024-11-17 08:27:03.002974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:58.212 08:27:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 89026 00:26:59.039 1884.80 IOPS, 7.36 MiB/s [2024-11-17T08:27:04.051Z] [2024-11-17 08:27:03.815942] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:27:00.915 3077.00 IOPS, 12.02 MiB/s [2024-11-17T08:27:06.865Z] 4185.71 IOPS, 16.35 MiB/s [2024-11-17T08:27:07.801Z] 5040.38 IOPS, 19.69 MiB/s [2024-11-17T08:27:08.739Z] 5701.78 IOPS, 22.27 MiB/s [2024-11-17T08:27:08.739Z] 6235.00 IOPS, 24.36 MiB/s 00:27:03.727 Latency(us) 00:27:03.727 [2024-11-17T08:27:08.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.727 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:03.727 Verification LBA range: start 0x0 length 0x4000 00:27:03.727 NVMe0n1 : 10.01 6242.22 24.38 4861.05 0.00 11507.93 572.45 3018551.31 00:27:03.727 [2024-11-17T08:27:08.739Z] =================================================================================================================== 00:27:03.727 [2024-11-17T08:27:08.739Z] Total : 6242.22 24.38 4861.05 0.00 11507.93 0.00 3018551.31 00:27:03.727 { 00:27:03.727 "results": [ 00:27:03.727 { 00:27:03.727 "job": "NVMe0n1", 00:27:03.727 "core_mask": "0x4", 00:27:03.727 "workload": "verify", 00:27:03.727 "status": "finished", 00:27:03.727 "verify_range": { 00:27:03.727 "start": 0, 00:27:03.727 "length": 16384 00:27:03.727 }, 00:27:03.727 "queue_depth": 128, 00:27:03.727 "io_size": 4096, 00:27:03.727 "runtime": 10.008943, 00:27:03.727 "iops": 6242.217584813901, 00:27:03.727 "mibps": 24.383662440679302, 00:27:03.727 "io_failed": 48654, 00:27:03.727 "io_timeout": 0, 00:27:03.727 "avg_latency_us": 11507.930767290447, 00:27:03.727 "min_latency_us": 572.4530120481928, 00:27:03.727 "max_latency_us": 3018551.3124497994 00:27:03.727 } 00:27:03.727 ], 00:27:03.727 "core_count": 1 00:27:03.727 } 00:27:03.727 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88898 00:27:03.727 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88898 ']' 00:27:03.727 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88898 00:27:03.727 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:03.727 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.727 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88898 00:27:03.986 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:03.986 killing process with pid 88898 00:27:03.986 Received shutdown signal, test time was about 10.000000 seconds 00:27:03.986 00:27:03.986 Latency(us) 00:27:03.986 [2024-11-17T08:27:08.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.986 [2024-11-17T08:27:08.998Z] 
=================================================================================================================== 00:27:03.986 [2024-11-17T08:27:08.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.986 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:03.986 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88898' 00:27:03.986 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88898 00:27:03.986 08:27:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88898 00:27:04.933 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=89148 00:27:04.933 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 89148 /var/tmp/bdevperf.sock 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 89148 ']' 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:04.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:04.934 08:27:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:04.934 [2024-11-17 08:27:09.921721] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:27:04.934 [2024-11-17 08:27:09.921853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89148 ] 00:27:05.193 [2024-11-17 08:27:10.108902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.452 [2024-11-17 08:27:10.233381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.711 [2024-11-17 08:27:10.470001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:05.971 08:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.971 08:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:05.971 08:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:05.971 08:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=89164 00:27:05.971 08:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:06.236 08:27:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:06.495 NVMe0n1 00:27:06.495 08:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=89200 00:27:06.495 08:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:06.495 08:27:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:06.496 Running I/O for 10 seconds... 
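For this second pass the harness starts a standalone bdevperf in wait-for-RPC mode (-z) and drives it over /var/tmp/bdevperf.sock; the attach step is what arms the behaviour under test, a 2 second reconnect delay inside a 5 second controller-loss window (--reconnect-delay-sec 2, --ctrlr-loss-timeout-sec 5). Condensed from the traced commands above, which also wait for the RPC socket, record pids and start the bpftrace helper, the setup is roughly:

    # Condensed sketch of the traced setup; option values are the ones shown in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &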
00:27:07.434 08:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:07.696 17399.00 IOPS, 67.96 MiB/s [2024-11-17T08:27:12.708Z] [2024-11-17 08:27:12.474972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475259] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475519] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.696 [2024-11-17 08:27:12.475541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:07.697 [2024-11-17 08:27:12.475827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 
08:27:12.475870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.475906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.475920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.475937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.475949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.475966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.475978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.697 [2024-11-17 08:27:12.476821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.697 [2024-11-17 08:27:12.476833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.476851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.476863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.476882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.476894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.476911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.476923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.476939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.476951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.476967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.476978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.476994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 
[2024-11-17 08:27:12.477135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477433] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477742] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.698 [2024-11-17 08:27:12.477969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.698 [2024-11-17 08:27:12.477980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.477998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 
08:27:12.478619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.478977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.478988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.479004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.479016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.479032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.479043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.479059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.479071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.479087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.699 [2024-11-17 08:27:12.479098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.699 [2024-11-17 08:27:12.479118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.700 [2024-11-17 08:27:12.479659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.479675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:27:07.700 [2024-11-17 08:27:12.479693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:07.700 [2024-11-17 08:27:12.479708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:07.700 [2024-11-17 08:27:12.479720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106176 len:8 PRP1 0x0 PRP2 0x0 00:27:07.700 [2024-11-17 08:27:12.479736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.480174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.700 [2024-11-17 08:27:12.480200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.480219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.700 [2024-11-17 08:27:12.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.480247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.700 [2024-11-17 08:27:12.480259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.480278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.700 [2024-11-17 08:27:12.480290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.700 [2024-11-17 08:27:12.480307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:07.700 [2024-11-17 08:27:12.480549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:07.700 [2024-11-17 08:27:12.480592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:07.700 [2024-11-17 08:27:12.480741] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.700 [2024-11-17 08:27:12.480765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:07.700 [2024-11-17 08:27:12.480783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:07.700 [2024-11-17 08:27:12.480804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:07.700 [2024-11-17 08:27:12.480827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:07.700 [2024-11-17 08:27:12.480840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:07.700 [2024-11-17 08:27:12.480859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:07.700 [2024-11-17 08:27:12.480873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
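Each failed reconnect attempt above follows the same cycle: uring_sock_create reports connect() failing with errno = 111, the TCP qpair flush then fails with "Bad file descriptor", and bdev_nvme records the reset attempt as failed before scheduling the next one roughly two seconds later. On Linux, errno 111 is ECONNREFUSED, which simply means nothing was accepting connections on 10.0.0.3:4420 at that moment. A one-line way to decode the value outside the harness (assumes python3 is on the PATH; not part of the test scripts):

    # decode errno 111 via python3's errno table
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # expected output: ECONNREFUSED - Connection refused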
00:27:07.700 [2024-11-17 08:27:12.480892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:07.700 08:27:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 89200 00:27:09.576 9494.50 IOPS, 37.09 MiB/s [2024-11-17T08:27:14.588Z] 6329.67 IOPS, 24.73 MiB/s [2024-11-17T08:27:14.588Z] [2024-11-17 08:27:14.477879] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.576 [2024-11-17 08:27:14.478061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:09.576 [2024-11-17 08:27:14.478200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:09.576 [2024-11-17 08:27:14.478327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:09.576 [2024-11-17 08:27:14.478408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:09.576 [2024-11-17 08:27:14.478541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:09.576 [2024-11-17 08:27:14.478658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:09.576 [2024-11-17 08:27:14.478740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:09.577 [2024-11-17 08:27:14.478806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:11.452 4747.25 IOPS, 18.54 MiB/s [2024-11-17T08:27:16.724Z] 3797.80 IOPS, 14.84 MiB/s [2024-11-17T08:27:16.724Z] [2024-11-17 08:27:16.475995] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.712 [2024-11-17 08:27:16.476186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:11.712 [2024-11-17 08:27:16.476316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:11.712 [2024-11-17 08:27:16.476397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:11.712 [2024-11-17 08:27:16.476500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:11.712 [2024-11-17 08:27:16.476729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:11.712 [2024-11-17 08:27:16.476793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:11.712 [2024-11-17 08:27:16.476832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:11.712 [2024-11-17 08:27:16.476892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:13.598 3164.83 IOPS, 12.36 MiB/s [2024-11-17T08:27:18.610Z] 2712.71 IOPS, 10.60 MiB/s [2024-11-17T08:27:18.610Z] [2024-11-17 08:27:18.473918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
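The IOPS samples printed while the controller stays unreachable (9494.50, 6329.67, 4747.25, 3797.80, 3164.83, 2712.71, continued by 2373.62 just below) decay exactly like a cumulative average: no new reads complete once the path is down, so a fixed total of roughly 18,989 completed I/Os is divided by an ever-growing elapsed time. A small check of that reading (the 18,989 figure is inferred from the samples themselves; it is not printed directly by the tool):

    # reproduce the decaying per-second averages from a fixed ~18989-I/O total
    for t in 2 3 4 5 6 7 8; do
        awk -v t="$t" 'BEGIN { printf "t=%ss  avg=%.2f IOPS\n", t, 18989 / t }'
    done
    # matches the logged samples to within rounding (9494.50 ... 2373.62)

The summary that follows is consistent with the same total: 2342.12 IOPS over the 8.11 s runtime is again about 18,989 I/Os, and 2342.12 4096-byte reads per second works out to the reported 9.15 MiB/s.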
00:27:13.598 [2024-11-17 08:27:18.473979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:13.598 [2024-11-17 08:27:18.474010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:13.598 [2024-11-17 08:27:18.474029] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:27:13.598 [2024-11-17 08:27:18.474047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:14.568 2373.62 IOPS, 9.27 MiB/s 00:27:14.568 Latency(us) 00:27:14.568 [2024-11-17T08:27:19.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.568 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:14.568 NVMe0n1 : 8.11 2342.12 9.15 15.79 0.00 54434.96 7106.31 7061253.96 00:27:14.568 [2024-11-17T08:27:19.580Z] =================================================================================================================== 00:27:14.568 [2024-11-17T08:27:19.580Z] Total : 2342.12 9.15 15.79 0.00 54434.96 7106.31 7061253.96 00:27:14.568 { 00:27:14.568 "results": [ 00:27:14.568 { 00:27:14.568 "job": "NVMe0n1", 00:27:14.568 "core_mask": "0x4", 00:27:14.568 "workload": "randread", 00:27:14.568 "status": "finished", 00:27:14.568 "queue_depth": 128, 00:27:14.568 "io_size": 4096, 00:27:14.568 "runtime": 8.107605, 00:27:14.568 "iops": 2342.121995336477, 00:27:14.568 "mibps": 9.148914044283114, 00:27:14.568 "io_failed": 128, 00:27:14.568 "io_timeout": 0, 00:27:14.568 "avg_latency_us": 54434.96061509206, 00:27:14.568 "min_latency_us": 7106.313253012048, 00:27:14.568 "max_latency_us": 7061253.963052209 00:27:14.568 } 00:27:14.568 ], 00:27:14.568 "core_count": 1 00:27:14.568 } 00:27:14.568 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:14.568 Attaching 5 probes... 
00:27:14.568 1171.124248: reset bdev controller NVMe0 00:27:14.568 1171.226463: reconnect bdev controller NVMe0 00:27:14.568 3168.318168: reconnect delay bdev controller NVMe0 00:27:14.568 3168.341134: reconnect bdev controller NVMe0 00:27:14.569 5166.423513: reconnect delay bdev controller NVMe0 00:27:14.569 5166.446976: reconnect bdev controller NVMe0 00:27:14.569 7164.473162: reconnect delay bdev controller NVMe0 00:27:14.569 7164.494888: reconnect bdev controller NVMe0 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 89164 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 89148 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 89148 ']' 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 89148 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89148 00:27:14.569 killing process with pid 89148 00:27:14.569 Received shutdown signal, test time was about 8.196905 seconds 00:27:14.569 00:27:14.569 Latency(us) 00:27:14.569 [2024-11-17T08:27:19.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.569 [2024-11-17T08:27:19.581Z] =================================================================================================================== 00:27:14.569 [2024-11-17T08:27:19.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89148' 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 89148 00:27:14.569 08:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 89148 00:27:15.949 08:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.949 08:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:15.949 08:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:15.949 08:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:15.949 08:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:27:16.208 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.208 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.209 08:27:21 
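The trace.txt dump above is what the timeout test actually asserts on: after the initial reset at ~1171 ms, "reconnect delay" entries appear at ~3168 ms, ~5166 ms and ~7164 ms, i.e. spaced roughly two seconds apart (3168.3 - 1171.2 ≈ 1997 ms, and similarly for the later pairs). The traced grep -c returned 3 and the following (( 3 <= 2 )) evaluates false, so the guard that presumably flags too few recorded reconnect delays does not fire and the run tears down as passed. A minimal sketch of that style of check, using the trace path the harness itself prints (not the harness's exact code):

    # require more than two reconnect-delay events in the captured trace
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected more than 2 reconnect delays, saw $delays" >&2
        exit 1
    fi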
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.209 rmmod nvme_tcp 00:27:16.209 rmmod nvme_fabrics 00:27:16.209 rmmod nvme_keyring 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 88700 ']' 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 88700 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88700 ']' 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88700 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88700 00:27:16.209 killing process with pid 88700 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88700' 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88700 00:27:16.209 08:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88700 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:17.591 08:27:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:17.591 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:17.851 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:27:17.852 00:27:17.852 real 0m50.570s 00:27:17.852 user 2m22.695s 00:27:17.852 sys 0m7.782s 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.852 ************************************ 00:27:17.852 END TEST nvmf_timeout 00:27:17.852 ************************************ 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:17.852 ************************************ 00:27:17.852 END TEST nvmf_host 00:27:17.852 ************************************ 00:27:17.852 00:27:17.852 real 6m21.336s 00:27:17.852 user 16m46.273s 00:27:17.852 sys 1m40.179s 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.852 08:27:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.852 08:27:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:17.852 08:27:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:27:17.852 00:27:17.852 real 16m35.324s 00:27:17.852 user 41m29.581s 00:27:17.852 sys 4m52.094s 00:27:17.852 08:27:22 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.852 08:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.852 ************************************ 00:27:17.852 END TEST nvmf_tcp 00:27:17.852 ************************************ 00:27:18.112 08:27:22 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:27:18.112 08:27:22 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:18.112 08:27:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:18.112 08:27:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.112 08:27:22 -- common/autotest_common.sh@10 -- # set +x 00:27:18.112 ************************************ 00:27:18.112 START TEST nvmf_dif 00:27:18.112 ************************************ 00:27:18.112 08:27:22 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:18.112 * Looking for test storage... 
00:27:18.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:18.112 08:27:23 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:18.112 08:27:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:27:18.112 08:27:23 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:18.112 08:27:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:18.112 08:27:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.112 08:27:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.112 08:27:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.373 08:27:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:18.373 08:27:23 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.374 08:27:23 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.374 --rc genhtml_branch_coverage=1 00:27:18.374 --rc genhtml_function_coverage=1 00:27:18.374 --rc genhtml_legend=1 00:27:18.374 --rc geninfo_all_blocks=1 00:27:18.374 --rc geninfo_unexecuted_blocks=1 00:27:18.374 00:27:18.374 ' 00:27:18.374 08:27:23 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.374 --rc genhtml_branch_coverage=1 00:27:18.374 --rc genhtml_function_coverage=1 00:27:18.374 --rc genhtml_legend=1 00:27:18.374 --rc geninfo_all_blocks=1 00:27:18.374 --rc geninfo_unexecuted_blocks=1 00:27:18.374 00:27:18.374 ' 00:27:18.374 08:27:23 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:27:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.374 --rc genhtml_branch_coverage=1 00:27:18.374 --rc genhtml_function_coverage=1 00:27:18.374 --rc genhtml_legend=1 00:27:18.374 --rc geninfo_all_blocks=1 00:27:18.374 --rc geninfo_unexecuted_blocks=1 00:27:18.374 00:27:18.374 ' 00:27:18.374 08:27:23 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:18.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.374 --rc genhtml_branch_coverage=1 00:27:18.374 --rc genhtml_function_coverage=1 00:27:18.374 --rc genhtml_legend=1 00:27:18.374 --rc geninfo_all_blocks=1 00:27:18.374 --rc geninfo_unexecuted_blocks=1 00:27:18.374 00:27:18.374 ' 00:27:18.374 08:27:23 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:18.374 08:27:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.374 08:27:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.374 08:27:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.374 08:27:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.374 08:27:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.374 08:27:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.374 08:27:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.374 08:27:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:18.374 08:27:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.374 08:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:18.374 08:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:18.374 08:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:18.374 08:27:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:18.374 08:27:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.374 08:27:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:18.374 08:27:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:18.374 08:27:23 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:18.374 Cannot find device "nvmf_init_br" 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:18.374 Cannot find device "nvmf_init_br2" 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:18.374 Cannot find device "nvmf_tgt_br" 00:27:18.374 08:27:23 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:18.375 Cannot find device "nvmf_tgt_br2" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:18.375 Cannot find device "nvmf_init_br" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:18.375 Cannot find device "nvmf_init_br2" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:18.375 Cannot find device "nvmf_tgt_br" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:18.375 Cannot find device "nvmf_tgt_br2" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:18.375 Cannot find device "nvmf_br" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:18.375 Cannot find device "nvmf_init_if" 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:18.375 08:27:23 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:18.635 Cannot find device "nvmf_init_if2" 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:18.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.635 08:27:23 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:18.635 08:27:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:18.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:27:18.895 00:27:18.895 --- 10.0.0.3 ping statistics --- 00:27:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.895 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:18.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:18.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:27:18.895 00:27:18.895 --- 10.0.0.4 ping statistics --- 00:27:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.895 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:27:18.895 00:27:18.895 --- 10.0.0.1 ping statistics --- 00:27:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.895 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:18.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:18.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:27:18.895 00:27:18.895 --- 10.0.0.2 ping statistics --- 00:27:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.895 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:18.895 08:27:23 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:19.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:19.467 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:19.467 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:19.467 08:27:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:19.467 08:27:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=89730 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 89730 00:27:19.467 08:27:24 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 89730 ']' 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.467 08:27:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.467 [2024-11-17 08:27:24.458329] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:19.467 [2024-11-17 08:27:24.458435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.727 [2024-11-17 08:27:24.639385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.986 [2024-11-17 08:27:24.777743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:19.986 [2024-11-17 08:27:24.777799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.986 [2024-11-17 08:27:24.777816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.986 [2024-11-17 08:27:24.777838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.986 [2024-11-17 08:27:24.777852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.986 [2024-11-17 08:27:24.779072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.246 [2024-11-17 08:27:25.018380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:20.246 08:27:25 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.246 08:27:25 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:20.246 08:27:25 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.246 08:27:25 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:20.246 08:27:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 08:27:25 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.507 08:27:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:20.507 08:27:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:20.507 08:27:25 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.507 08:27:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 [2024-11-17 08:27:25.315791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.507 08:27:25 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.507 08:27:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:20.507 08:27:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:20.507 08:27:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.507 08:27:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 ************************************ 00:27:20.507 START TEST fio_dif_1_default 00:27:20.507 ************************************ 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 bdev_null0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:20.507 
08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:20.507 [2024-11-17 08:27:25.379972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:20.507 { 00:27:20.507 "params": { 00:27:20.507 "name": "Nvme$subsystem", 00:27:20.507 "trtype": "$TEST_TRANSPORT", 00:27:20.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.507 "adrfam": "ipv4", 00:27:20.507 "trsvcid": "$NVMF_PORT", 00:27:20.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.507 "hdgst": ${hdgst:-false}, 00:27:20.507 "ddgst": ${ddgst:-false} 00:27:20.507 }, 00:27:20.507 "method": "bdev_nvme_attach_controller" 00:27:20.507 } 00:27:20.507 EOF 00:27:20.507 )") 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:20.507 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:20.508 "params": { 00:27:20.508 "name": "Nvme0", 00:27:20.508 "trtype": "tcp", 00:27:20.508 "traddr": "10.0.0.3", 00:27:20.508 "adrfam": "ipv4", 00:27:20.508 "trsvcid": "4420", 00:27:20.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:20.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:20.508 "hdgst": false, 00:27:20.508 "ddgst": false 00:27:20.508 }, 00:27:20.508 "method": "bdev_nvme_attach_controller" 00:27:20.508 }' 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:20.508 08:27:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.768 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:20.768 fio-3.35 00:27:20.768 Starting 1 thread 00:27:32.984 00:27:32.984 filename0: (groupid=0, jobs=1): err= 0: pid=89794: Sun Nov 17 08:27:36 2024 00:27:32.984 read: IOPS=10.3k, BW=40.2MiB/s (42.1MB/s)(402MiB/10001msec) 00:27:32.984 slat (nsec): min=6139, max=64040, avg=6783.77, stdev=1345.22 00:27:32.984 clat (usec): min=326, max=3911, avg=369.54, stdev=41.51 00:27:32.984 lat (usec): min=332, max=3919, avg=376.32, stdev=41.69 00:27:32.984 clat percentiles (usec): 00:27:32.984 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 351], 00:27:32.984 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:27:32.984 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 404], 00:27:32.984 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 766], 99.95th=[ 1188], 00:27:32.984 | 99.99th=[ 1500] 00:27:32.984 bw ( KiB/s): min=39264, max=41760, per=100.00%, avg=41185.89, stdev=620.32, samples=19 00:27:32.984 iops : min= 9816, max=10440, avg=10296.42, stdev=155.09, samples=19 00:27:32.984 lat (usec) : 
500=99.69%, 750=0.20%, 1000=0.05% 00:27:32.984 lat (msec) : 2=0.05%, 4=0.01% 00:27:32.984 cpu : usr=80.40%, sys=17.64%, ctx=25, majf=0, minf=1074 00:27:32.984 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.984 issued rwts: total=102904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.984 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:32.984 00:27:32.984 Run status group 0 (all jobs): 00:27:32.984 READ: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=402MiB (421MB), run=10001-10001msec 00:27:32.984 ----------------------------------------------------- 00:27:32.984 Suppressions used: 00:27:32.984 count bytes template 00:27:32.984 1 8 /usr/src/fio/parse.c 00:27:32.984 1 8 libtcmalloc_minimal.so 00:27:32.984 1 904 libcrypto.so 00:27:32.984 ----------------------------------------------------- 00:27:32.984 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.984 ************************************ 00:27:32.984 END TEST fio_dif_1_default 00:27:32.984 ************************************ 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.984 00:27:32.984 real 0m12.566s 00:27:32.984 user 0m9.997s 00:27:32.984 sys 0m2.307s 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.984 08:27:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.984 08:27:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:32.984 08:27:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:32.985 08:27:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.985 08:27:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:32.985 ************************************ 00:27:32.985 START TEST fio_dif_1_multi_subsystems 00:27:32.985 ************************************ 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:32.985 08:27:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.985 bdev_null0 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.985 08:27:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 [2024-11-17 08:27:38.023716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 bdev_null1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 
--allow-any-host 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.245 { 00:27:33.245 "params": { 00:27:33.245 "name": "Nvme$subsystem", 00:27:33.245 "trtype": "$TEST_TRANSPORT", 00:27:33.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.245 "adrfam": "ipv4", 00:27:33.245 "trsvcid": "$NVMF_PORT", 00:27:33.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.245 "hdgst": ${hdgst:-false}, 00:27:33.245 "ddgst": ${ddgst:-false} 00:27:33.245 }, 00:27:33.245 "method": "bdev_nvme_attach_controller" 00:27:33.245 } 00:27:33.245 EOF 00:27:33.245 )") 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.245 08:27:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.245 { 00:27:33.245 "params": { 00:27:33.245 "name": "Nvme$subsystem", 00:27:33.245 "trtype": "$TEST_TRANSPORT", 00:27:33.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.245 "adrfam": "ipv4", 00:27:33.245 "trsvcid": "$NVMF_PORT", 00:27:33.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.245 "hdgst": ${hdgst:-false}, 00:27:33.245 "ddgst": ${ddgst:-false} 00:27:33.245 }, 00:27:33.245 "method": "bdev_nvme_attach_controller" 00:27:33.245 } 00:27:33.245 EOF 00:27:33.245 )") 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:33.245 "params": { 00:27:33.245 "name": "Nvme0", 00:27:33.245 "trtype": "tcp", 00:27:33.245 "traddr": "10.0.0.3", 00:27:33.245 "adrfam": "ipv4", 00:27:33.245 "trsvcid": "4420", 00:27:33.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.245 "hdgst": false, 00:27:33.245 "ddgst": false 00:27:33.245 }, 00:27:33.245 "method": "bdev_nvme_attach_controller" 00:27:33.245 },{ 00:27:33.245 "params": { 00:27:33.245 "name": "Nvme1", 00:27:33.245 "trtype": "tcp", 00:27:33.245 "traddr": "10.0.0.3", 00:27:33.245 "adrfam": "ipv4", 00:27:33.245 "trsvcid": "4420", 00:27:33.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.245 "hdgst": false, 00:27:33.245 "ddgst": false 00:27:33.245 }, 00:27:33.245 "method": "bdev_nvme_attach_controller" 00:27:33.245 }' 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:33.245 08:27:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.505 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:33.505 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:33.505 fio-3.35 00:27:33.505 Starting 2 threads 00:27:45.719 00:27:45.719 filename0: (groupid=0, jobs=1): err= 0: pid=89958: Sun Nov 17 08:27:49 2024 00:27:45.719 read: IOPS=5285, BW=20.6MiB/s (21.6MB/s)(206MiB/10001msec) 00:27:45.719 slat (usec): min=6, max=172, avg=13.67, stdev= 5.58 00:27:45.719 clat (usec): min=376, max=2959, avg=718.64, stdev=36.61 00:27:45.719 lat (usec): min=383, max=2972, avg=732.31, stdev=38.20 00:27:45.719 clat percentiles (usec): 00:27:45.719 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 701], 00:27:45.719 | 30.00th=[ 709], 40.00th=[ 717], 50.00th=[ 717], 60.00th=[ 725], 00:27:45.719 | 70.00th=[ 734], 80.00th=[ 742], 90.00th=[ 750], 95.00th=[ 766], 00:27:45.719 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 832], 99.95th=[ 873], 00:27:45.719 | 99.99th=[ 1467] 00:27:45.719 bw ( KiB/s): min=20704, max=21376, per=50.09%, avg=21180.05, stdev=223.72, samples=19 00:27:45.719 iops : min= 5176, max= 5344, avg=5295.00, stdev=55.92, samples=19 00:27:45.719 lat (usec) : 500=0.01%, 750=87.70%, 1000=12.26% 00:27:45.719 lat (msec) : 2=0.03%, 4=0.01% 00:27:45.719 cpu : usr=88.78%, sys=9.89%, ctx=12, majf=0, minf=1074 00:27:45.719 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.719 issued rwts: total=52860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.719 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:27:45.719 filename1: (groupid=0, jobs=1): err= 0: pid=89959: Sun Nov 17 08:27:49 2024 00:27:45.719 read: IOPS=5285, BW=20.6MiB/s (21.6MB/s)(206MiB/10001msec) 00:27:45.719 slat (nsec): min=6246, max=95624, avg=13319.61, stdev=4358.88 00:27:45.719 clat (usec): min=390, max=2902, avg=719.99, stdev=35.11 00:27:45.719 lat (usec): min=397, max=2916, avg=733.31, stdev=35.81 00:27:45.719 clat percentiles (usec): 00:27:45.719 | 1.00th=[ 668], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 701], 00:27:45.719 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 725], 00:27:45.719 | 70.00th=[ 734], 80.00th=[ 742], 90.00th=[ 758], 95.00th=[ 766], 00:27:45.719 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 832], 99.95th=[ 979], 00:27:45.719 | 99.99th=[ 1352] 00:27:45.719 bw ( KiB/s): min=20704, max=21376, per=50.08%, avg=21176.68, stdev=225.40, samples=19 00:27:45.719 iops : min= 5176, max= 5344, avg=5294.16, stdev=56.34, samples=19 00:27:45.719 lat (usec) : 500=0.02%, 750=86.99%, 1000=12.94% 00:27:45.719 lat (msec) : 2=0.04%, 4=0.01% 00:27:45.719 cpu : usr=88.18%, sys=10.61%, ctx=7, majf=0, minf=1072 00:27:45.719 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.719 issued rwts: total=52860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.719 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:45.719 00:27:45.719 Run status group 0 (all jobs): 00:27:45.719 READ: bw=41.3MiB/s (43.3MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=413MiB (433MB), run=10001-10001msec 00:27:45.719 ----------------------------------------------------- 00:27:45.719 Suppressions used: 00:27:45.719 count bytes template 00:27:45.719 2 16 /usr/src/fio/parse.c 00:27:45.719 1 8 libtcmalloc_minimal.so 00:27:45.719 1 904 libcrypto.so 00:27:45.719 ----------------------------------------------------- 00:27:45.719 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for 
sub in "$@" 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 ************************************ 00:27:45.980 END TEST fio_dif_1_multi_subsystems 00:27:45.980 ************************************ 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 00:27:45.980 real 0m12.840s 00:27:45.980 user 0m19.937s 00:27:45.980 sys 0m2.617s 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 08:27:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:45.980 08:27:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:45.980 08:27:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 ************************************ 00:27:45.980 START TEST fio_dif_rand_params 00:27:45.980 ************************************ 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 bdev_null0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.980 [2024-11-17 08:27:50.947834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:45.980 { 00:27:45.980 "params": { 00:27:45.980 "name": "Nvme$subsystem", 00:27:45.980 "trtype": "$TEST_TRANSPORT", 00:27:45.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.980 "adrfam": "ipv4", 00:27:45.980 "trsvcid": "$NVMF_PORT", 00:27:45.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.980 "hdgst": ${hdgst:-false}, 00:27:45.980 "ddgst": ${ddgst:-false} 00:27:45.980 }, 00:27:45.980 "method": "bdev_nvme_attach_controller" 00:27:45.980 } 00:27:45.980 EOF 00:27:45.980 )") 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:45.980 08:27:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:45.980 "params": { 00:27:45.980 "name": "Nvme0", 00:27:45.980 "trtype": "tcp", 00:27:45.980 "traddr": "10.0.0.3", 00:27:45.980 "adrfam": "ipv4", 00:27:45.980 "trsvcid": "4420", 00:27:45.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.980 "hdgst": false, 00:27:45.980 "ddgst": false 00:27:45.980 }, 00:27:45.980 "method": "bdev_nvme_attach_controller" 00:27:45.980 }' 00:27:46.240 08:27:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:46.240 08:27:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:46.240 08:27:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:46.240 08:27:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:46.240 08:27:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.240 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:46.240 ... 
00:27:46.240 fio-3.35 00:27:46.240 Starting 3 threads 00:27:52.832 00:27:52.832 filename0: (groupid=0, jobs=1): err= 0: pid=90129: Sun Nov 17 08:27:57 2024 00:27:52.832 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5003msec) 00:27:52.832 slat (nsec): min=6760, max=77702, avg=31928.19, stdev=17358.48 00:27:52.832 clat (usec): min=8482, max=27879, avg=10575.41, stdev=825.86 00:27:52.832 lat (usec): min=8492, max=27898, avg=10607.34, stdev=826.43 00:27:52.832 clat percentiles (usec): 00:27:52.832 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10421], 00:27:52.832 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 00:27:52.832 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10683], 95.00th=[10683], 00:27:52.832 | 99.00th=[10814], 99.50th=[10814], 99.90th=[27919], 99.95th=[27919], 00:27:52.832 | 99.99th=[27919] 00:27:52.832 bw ( KiB/s): min=34560, max=36864, per=33.36%, avg=36096.00, stdev=665.11, samples=9 00:27:52.832 iops : min= 270, max= 288, avg=282.00, stdev= 5.20, samples=9 00:27:52.832 lat (msec) : 10=0.43%, 20=99.36%, 50=0.21% 00:27:52.832 cpu : usr=93.40%, sys=6.12%, ctx=7, majf=0, minf=1075 00:27:52.832 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.832 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:52.832 filename0: (groupid=0, jobs=1): err= 0: pid=90130: Sun Nov 17 08:27:57 2024 00:27:52.832 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5004msec) 00:27:52.832 slat (nsec): min=6333, max=77519, avg=31556.98, stdev=17566.45 00:27:52.832 clat (usec): min=7033, max=27756, avg=10578.91, stdev=837.08 00:27:52.832 lat (usec): min=7060, max=27772, avg=10610.47, stdev=837.43 00:27:52.832 clat percentiles (usec): 00:27:52.832 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10421], 00:27:52.832 | 30.00th=[10552], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 00:27:52.832 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10683], 95.00th=[10683], 00:27:52.832 | 99.00th=[10814], 99.50th=[12780], 99.90th=[27657], 99.95th=[27657], 00:27:52.832 | 99.99th=[27657] 00:27:52.832 bw ( KiB/s): min=34560, max=36864, per=33.36%, avg=36096.00, stdev=665.11, samples=9 00:27:52.832 iops : min= 270, max= 288, avg=282.00, stdev= 5.20, samples=9 00:27:52.832 lat (msec) : 10=0.21%, 20=99.57%, 50=0.21% 00:27:52.832 cpu : usr=93.90%, sys=5.64%, ctx=6, majf=0, minf=1073 00:27:52.832 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.832 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:52.832 filename0: (groupid=0, jobs=1): err= 0: pid=90131: Sun Nov 17 08:27:57 2024 00:27:52.832 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5003msec) 00:27:52.832 slat (usec): min=6, max=110, avg=29.24, stdev=16.85 00:27:52.832 clat (usec): min=7049, max=27755, avg=10581.93, stdev=832.01 00:27:52.832 lat (usec): min=7081, max=27774, avg=10611.18, stdev=832.30 00:27:52.832 clat percentiles (usec): 00:27:52.832 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10421], 20.00th=[10421], 00:27:52.832 | 30.00th=[10552], 
40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 00:27:52.832 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10683], 95.00th=[10683], 00:27:52.832 | 99.00th=[10814], 99.50th=[11731], 99.90th=[27657], 99.95th=[27657], 00:27:52.832 | 99.99th=[27657] 00:27:52.832 bw ( KiB/s): min=34629, max=36864, per=33.37%, avg=36103.67, stdev=645.29, samples=9 00:27:52.832 iops : min= 270, max= 288, avg=282.00, stdev= 5.20, samples=9 00:27:52.832 lat (msec) : 10=0.21%, 20=99.57%, 50=0.21% 00:27:52.832 cpu : usr=94.30%, sys=5.16%, ctx=74, majf=0, minf=1075 00:27:52.832 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.832 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:52.832 00:27:52.832 Run status group 0 (all jobs): 00:27:52.832 READ: bw=106MiB/s (111MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=529MiB (554MB), run=5003-5004msec 00:27:53.400 ----------------------------------------------------- 00:27:53.400 Suppressions used: 00:27:53.400 count bytes template 00:27:53.400 5 44 /usr/src/fio/parse.c 00:27:53.400 1 8 libtcmalloc_minimal.so 00:27:53.400 1 904 libcrypto.so 00:27:53.400 ----------------------------------------------------- 00:27:53.400 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.660 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 
00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 bdev_null0 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 [2024-11-17 08:27:58.480952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 bdev_null1 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
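The create_subsystem calls traced here follow a fixed RPC sequence per index: create a DIF-capable null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener; destroy_subsystem later reverses it. A condensed sketch with the values from this run (the function names below are illustrative, and rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py):

create_dif_subsystem() {
    local id=$1
    # Null bdev with the sizes passed in the log (64 / 512), 16-byte metadata, DIF type 2.
    rpc_cmd bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
        --serial-number "53313233-$id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
    # NVMe/TCP listener on the target address used throughout this run.
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
        -t tcp -a 10.0.0.3 -s 4420
}

destroy_dif_subsystem() {
    local id=$1
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$id"
    rpc_cmd bdev_null_delete "bdev_null$id"
}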
00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 bdev_null2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.661 { 00:27:53.661 "params": { 00:27:53.661 "name": "Nvme$subsystem", 00:27:53.661 "trtype": "$TEST_TRANSPORT", 00:27:53.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.661 "adrfam": "ipv4", 00:27:53.661 "trsvcid": "$NVMF_PORT", 00:27:53.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.661 "hdgst": ${hdgst:-false}, 00:27:53.661 "ddgst": ${ddgst:-false} 00:27:53.661 }, 00:27:53.661 "method": "bdev_nvme_attach_controller" 00:27:53.661 } 00:27:53.661 EOF 00:27:53.661 )") 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.661 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.661 { 00:27:53.661 "params": { 00:27:53.661 "name": "Nvme$subsystem", 00:27:53.661 "trtype": "$TEST_TRANSPORT", 00:27:53.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.661 "adrfam": "ipv4", 00:27:53.661 "trsvcid": "$NVMF_PORT", 00:27:53.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.661 "hdgst": ${hdgst:-false}, 00:27:53.661 "ddgst": ${ddgst:-false} 00:27:53.661 }, 00:27:53.661 "method": "bdev_nvme_attach_controller" 00:27:53.661 } 00:27:53.661 EOF 00:27:53.661 )") 00:27:53.661 08:27:58 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.662 { 00:27:53.662 "params": { 00:27:53.662 "name": "Nvme$subsystem", 00:27:53.662 "trtype": "$TEST_TRANSPORT", 00:27:53.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.662 "adrfam": "ipv4", 00:27:53.662 "trsvcid": "$NVMF_PORT", 00:27:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.662 "hdgst": ${hdgst:-false}, 00:27:53.662 "ddgst": ${ddgst:-false} 00:27:53.662 }, 00:27:53.662 "method": "bdev_nvme_attach_controller" 00:27:53.662 } 00:27:53.662 EOF 00:27:53.662 )") 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:53.662 "params": { 00:27:53.662 "name": "Nvme0", 00:27:53.662 "trtype": "tcp", 00:27:53.662 "traddr": "10.0.0.3", 00:27:53.662 "adrfam": "ipv4", 00:27:53.662 "trsvcid": "4420", 00:27:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.662 "hdgst": false, 00:27:53.662 "ddgst": false 00:27:53.662 }, 00:27:53.662 "method": "bdev_nvme_attach_controller" 00:27:53.662 },{ 00:27:53.662 "params": { 00:27:53.662 "name": "Nvme1", 00:27:53.662 "trtype": "tcp", 00:27:53.662 "traddr": "10.0.0.3", 00:27:53.662 "adrfam": "ipv4", 00:27:53.662 "trsvcid": "4420", 00:27:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.662 "hdgst": false, 00:27:53.662 "ddgst": false 00:27:53.662 }, 00:27:53.662 "method": "bdev_nvme_attach_controller" 00:27:53.662 },{ 00:27:53.662 "params": { 00:27:53.662 "name": "Nvme2", 00:27:53.662 "trtype": "tcp", 00:27:53.662 "traddr": "10.0.0.3", 00:27:53.662 "adrfam": "ipv4", 00:27:53.662 "trsvcid": "4420", 00:27:53.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.662 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.662 "hdgst": false, 00:27:53.662 "ddgst": false 00:27:53.662 }, 00:27:53.662 "method": "bdev_nvme_attach_controller" 00:27:53.662 }' 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:53.662 08:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.921 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.921 ... 00:27:53.921 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.921 ... 00:27:53.921 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:53.921 ... 00:27:53.921 fio-3.35 00:27:53.921 Starting 24 threads 00:28:06.129 00:28:06.129 filename0: (groupid=0, jobs=1): err= 0: pid=90231: Sun Nov 17 08:28:10 2024 00:28:06.129 read: IOPS=252, BW=1008KiB/s (1033kB/s)(9.90MiB/10055msec) 00:28:06.129 slat (usec): min=3, max=5226, avg=24.16, stdev=182.96 00:28:06.129 clat (usec): min=1397, max=130430, avg=63210.66, stdev=23433.97 00:28:06.129 lat (usec): min=1411, max=130442, avg=63234.82, stdev=23435.58 00:28:06.129 clat percentiles (usec): 00:28:06.129 | 1.00th=[ 1582], 5.00th=[ 3982], 10.00th=[ 39060], 20.00th=[ 47973], 00:28:06.129 | 30.00th=[ 57934], 40.00th=[ 62129], 50.00th=[ 67634], 60.00th=[ 71828], 00:28:06.129 | 70.00th=[ 71828], 80.00th=[ 74974], 90.00th=[ 89654], 95.00th=[104334], 00:28:06.129 | 99.00th=[109577], 99.50th=[111674], 99.90th=[116917], 99.95th=[116917], 00:28:06.129 | 99.99th=[130548] 00:28:06.129 bw ( KiB/s): min= 720, max= 2549, per=4.37%, avg=1009.55, stdev=373.26, samples=20 00:28:06.129 iops : min= 180, max= 637, avg=252.35, stdev=93.26, samples=20 00:28:06.129 lat (msec) : 2=2.21%, 4=2.92%, 10=0.55%, 20=1.18%, 50=19.96% 00:28:06.129 lat (msec) : 100=66.31%, 250=6.86% 00:28:06.129 cpu : usr=39.80%, sys=2.09%, ctx=1141, majf=0, minf=1075 00:28:06.129 IO depths : 1=0.3%, 2=1.8%, 4=6.1%, 8=76.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:06.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.129 complete : 0=0.0%, 4=88.9%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.129 issued rwts: total=2535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.129 filename0: (groupid=0, jobs=1): err= 0: pid=90232: Sun Nov 17 08:28:10 2024 00:28:06.129 read: IOPS=236, BW=946KiB/s (969kB/s)(9504KiB/10043msec) 00:28:06.129 slat (usec): min=3, max=12024, avg=53.75, stdev=501.96 00:28:06.129 clat (msec): min=31, max=120, avg=67.29, stdev=17.03 00:28:06.129 lat (msec): min=31, max=120, avg=67.35, stdev=17.02 00:28:06.129 clat percentiles (msec): 00:28:06.129 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:28:06.129 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:28:06.129 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 88], 95.00th=[ 106], 00:28:06.129 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 120], 99.95th=[ 120], 00:28:06.129 | 99.99th=[ 121] 00:28:06.129 bw ( KiB/s): min= 720, max= 1128, per=4.09%, avg=945.60, stdev=111.63, samples=20 00:28:06.129 iops : min= 180, max= 282, avg=236.40, stdev=27.91, samples=20 00:28:06.129 lat (msec) : 50=19.70%, 100=73.78%, 250=6.52% 00:28:06.129 cpu : usr=33.46%, sys=1.11%, ctx=963, majf=0, minf=1071 00:28:06.129 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:28:06.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.129 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.129 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.129 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:28:06.129 filename0: (groupid=0, jobs=1): err= 0: pid=90233: Sun Nov 17 08:28:10 2024 00:28:06.129 read: IOPS=246, BW=987KiB/s (1011kB/s)(9932KiB/10063msec) 00:28:06.129 slat (usec): min=3, max=9026, avg=38.97, stdev=304.96 00:28:06.129 clat (msec): min=20, max=113, avg=64.58, stdev=18.20 00:28:06.129 lat (msec): min=20, max=113, avg=64.62, stdev=18.20 00:28:06.129 clat percentiles (msec): 00:28:06.129 | 1.00th=[ 27], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.129 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 70], 00:28:06.129 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 88], 95.00th=[ 105], 00:28:06.129 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 114], 99.95th=[ 114], 00:28:06.129 | 99.99th=[ 114] 00:28:06.129 bw ( KiB/s): min= 720, max= 1296, per=4.27%, avg=986.80, stdev=131.59, samples=20 00:28:06.129 iops : min= 180, max= 324, avg=246.70, stdev=32.90, samples=20 00:28:06.129 lat (msec) : 50=27.99%, 100=64.92%, 250=7.09% 00:28:06.129 cpu : usr=42.60%, sys=2.14%, ctx=1371, majf=0, minf=1071 00:28:06.129 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename0: (groupid=0, jobs=1): err= 0: pid=90234: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=247, BW=989KiB/s (1013kB/s)(9916KiB/10024msec) 00:28:06.130 slat (usec): min=3, max=8077, avg=56.79, stdev=515.23 00:28:06.130 clat (msec): min=26, max=119, avg=64.43, stdev=17.75 00:28:06.130 lat (msec): min=26, max=119, avg=64.48, stdev=17.74 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 105], 00:28:06.130 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:28:06.130 | 99.99th=[ 121] 00:28:06.130 bw ( KiB/s): min= 720, max= 1128, per=4.26%, avg=985.95, stdev=120.97, samples=20 00:28:06.130 iops : min= 180, max= 282, avg=246.45, stdev=30.22, samples=20 00:28:06.130 lat (msec) : 50=29.89%, 100=64.70%, 250=5.41% 00:28:06.130 cpu : usr=34.44%, sys=1.33%, ctx=960, majf=0, minf=1074 00:28:06.130 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename0: (groupid=0, jobs=1): err= 0: pid=90235: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=244, BW=978KiB/s (1001kB/s)(9828KiB/10052msec) 00:28:06.130 slat (usec): min=3, max=8060, avg=42.25, stdev=355.12 00:28:06.130 clat (msec): min=16, max=135, avg=65.22, stdev=18.51 00:28:06.130 lat (msec): min=16, max=135, avg=65.26, stdev=18.52 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 90], 95.00th=[ 106], 00:28:06.130 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 
116], 99.95th=[ 116], 00:28:06.130 | 99.99th=[ 136] 00:28:06.130 bw ( KiB/s): min= 712, max= 1200, per=4.23%, avg=976.10, stdev=139.96, samples=20 00:28:06.130 iops : min= 178, max= 300, avg=244.00, stdev=34.95, samples=20 00:28:06.130 lat (msec) : 20=0.65%, 50=28.41%, 100=63.53%, 250=7.41% 00:28:06.130 cpu : usr=38.53%, sys=1.97%, ctx=1178, majf=0, minf=1071 00:28:06.130 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename0: (groupid=0, jobs=1): err= 0: pid=90236: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=242, BW=969KiB/s (993kB/s)(9768KiB/10078msec) 00:28:06.130 slat (usec): min=3, max=8064, avg=43.40, stdev=363.24 00:28:06.130 clat (msec): min=2, max=143, avg=65.65, stdev=20.96 00:28:06.130 lat (msec): min=2, max=143, avg=65.69, stdev=20.95 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 93], 95.00th=[ 107], 00:28:06.130 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 132], 00:28:06.130 | 99.99th=[ 144] 00:28:06.130 bw ( KiB/s): min= 664, max= 1793, per=4.20%, avg=971.65, stdev=217.95, samples=20 00:28:06.130 iops : min= 166, max= 448, avg=242.90, stdev=54.44, samples=20 00:28:06.130 lat (msec) : 4=0.66%, 10=1.31%, 20=1.80%, 50=20.80%, 100=68.88% 00:28:06.130 lat (msec) : 250=6.55% 00:28:06.130 cpu : usr=35.41%, sys=1.29%, ctx=972, majf=0, minf=1074 00:28:06.130 IO depths : 1=0.2%, 2=0.8%, 4=2.8%, 8=79.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename0: (groupid=0, jobs=1): err= 0: pid=90237: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=235, BW=942KiB/s (965kB/s)(9480KiB/10060msec) 00:28:06.130 slat (usec): min=3, max=8035, avg=23.57, stdev=233.08 00:28:06.130 clat (msec): min=16, max=149, avg=67.67, stdev=19.34 00:28:06.130 lat (msec): min=16, max=149, avg=67.69, stdev=19.35 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:28:06.130 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 133], 99.95th=[ 140], 00:28:06.130 | 99.99th=[ 150] 00:28:06.130 bw ( KiB/s): min= 744, max= 1386, per=4.08%, avg=943.30, stdev=141.40, samples=20 00:28:06.130 iops : min= 186, max= 346, avg=235.80, stdev=35.27, samples=20 00:28:06.130 lat (msec) : 20=0.76%, 50=22.03%, 100=69.54%, 250=7.68% 00:28:06.130 cpu : usr=33.11%, sys=1.44%, ctx=892, majf=0, minf=1074 00:28:06.130 IO depths : 1=0.1%, 2=0.5%, 4=2.2%, 8=80.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:06.130 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename0: (groupid=0, jobs=1): err= 0: pid=90238: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.81MiB/10040msec) 00:28:06.130 slat (usec): min=3, max=8060, avg=40.67, stdev=372.64 00:28:06.130 clat (msec): min=16, max=117, avg=63.68, stdev=17.97 00:28:06.130 lat (msec): min=16, max=117, avg=63.72, stdev=17.97 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 48], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 105], 00:28:06.130 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118], 00:28:06.130 | 99.99th=[ 118] 00:28:06.130 bw ( KiB/s): min= 712, max= 1176, per=4.32%, avg=998.50, stdev=122.09, samples=20 00:28:06.130 iops : min= 178, max= 294, avg=249.60, stdev=30.50, samples=20 00:28:06.130 lat (msec) : 20=0.08%, 50=32.84%, 100=61.15%, 250=5.93% 00:28:06.130 cpu : usr=33.14%, sys=1.27%, ctx=1041, majf=0, minf=1074 00:28:06.130 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename1: (groupid=0, jobs=1): err= 0: pid=90239: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=242, BW=972KiB/s (995kB/s)(9752KiB/10038msec) 00:28:06.130 slat (usec): min=3, max=9078, avg=45.03, stdev=381.70 00:28:06.130 clat (msec): min=25, max=135, avg=65.66, stdev=17.20 00:28:06.130 lat (msec): min=25, max=135, avg=65.70, stdev=17.20 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 105], 00:28:06.130 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 118], 00:28:06.130 | 99.99th=[ 136] 00:28:06.130 bw ( KiB/s): min= 712, max= 1120, per=4.19%, avg=968.85, stdev=112.65, samples=20 00:28:06.130 iops : min= 178, max= 280, avg=242.20, stdev=28.17, samples=20 00:28:06.130 lat (msec) : 50=24.98%, 100=67.76%, 250=7.26% 00:28:06.130 cpu : usr=40.63%, sys=1.52%, ctx=1178, majf=0, minf=1072 00:28:06.130 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename1: (groupid=0, jobs=1): err= 0: pid=90240: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=245, BW=980KiB/s (1004kB/s)(9844KiB/10040msec) 00:28:06.130 slat (usec): min=3, max=8052, avg=35.91, stdev=323.70 00:28:06.130 clat (msec): min=24, max=116, avg=65.05, stdev=17.46 00:28:06.130 lat (msec): min=24, max=116, avg=65.09, stdev=17.47 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 65], 
60.00th=[ 70], 00:28:06.130 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 104], 00:28:06.130 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 116], 99.95th=[ 116], 00:28:06.130 | 99.99th=[ 117] 00:28:06.130 bw ( KiB/s): min= 720, max= 1168, per=4.24%, avg=980.40, stdev=125.62, samples=20 00:28:06.130 iops : min= 180, max= 292, avg=245.10, stdev=31.41, samples=20 00:28:06.130 lat (msec) : 50=26.57%, 100=67.45%, 250=5.97% 00:28:06.130 cpu : usr=37.80%, sys=1.97%, ctx=1258, majf=0, minf=1074 00:28:06.130 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:06.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.130 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.130 filename1: (groupid=0, jobs=1): err= 0: pid=90241: Sun Nov 17 08:28:10 2024 00:28:06.130 read: IOPS=242, BW=971KiB/s (995kB/s)(9784KiB/10074msec) 00:28:06.130 slat (usec): min=3, max=4048, avg=23.98, stdev=133.52 00:28:06.130 clat (msec): min=2, max=134, avg=65.65, stdev=19.83 00:28:06.130 lat (msec): min=2, max=134, avg=65.67, stdev=19.83 00:28:06.130 clat percentiles (msec): 00:28:06.130 | 1.00th=[ 13], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.130 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 67], 60.00th=[ 71], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 94], 95.00th=[ 107], 00:28:06.131 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 127], 99.95th=[ 130], 00:28:06.131 | 99.99th=[ 136] 00:28:06.131 bw ( KiB/s): min= 712, max= 1617, per=4.20%, avg=971.25, stdev=182.64, samples=20 00:28:06.131 iops : min= 178, max= 404, avg=242.80, stdev=45.61, samples=20 00:28:06.131 lat (msec) : 4=0.65%, 20=0.82%, 50=22.65%, 100=69.34%, 250=6.54% 00:28:06.131 cpu : usr=37.53%, sys=1.73%, ctx=1049, majf=0, minf=1073 00:28:06.131 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename1: (groupid=0, jobs=1): err= 0: pid=90242: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=241, BW=965KiB/s (988kB/s)(9700KiB/10055msec) 00:28:06.131 slat (usec): min=3, max=8059, avg=40.88, stdev=348.61 00:28:06.131 clat (msec): min=18, max=133, avg=66.03, stdev=18.99 00:28:06.131 lat (msec): min=18, max=133, avg=66.07, stdev=19.00 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:28:06.131 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 71], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 94], 95.00th=[ 107], 00:28:06.131 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 126], 00:28:06.131 | 99.99th=[ 134] 00:28:06.131 bw ( KiB/s): min= 712, max= 1416, per=4.18%, avg=966.40, stdev=154.36, samples=20 00:28:06.131 iops : min= 178, max= 354, avg=241.60, stdev=38.59, samples=20 00:28:06.131 lat (msec) : 20=0.16%, 50=24.99%, 100=67.92%, 250=6.93% 00:28:06.131 cpu : usr=33.71%, sys=1.19%, ctx=985, majf=0, minf=1075 00:28:06.131 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename1: (groupid=0, jobs=1): err= 0: pid=90243: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=212, BW=849KiB/s (869kB/s)(8548KiB/10070msec) 00:28:06.131 slat (usec): min=3, max=8078, avg=38.99, stdev=354.64 00:28:06.131 clat (msec): min=13, max=145, avg=75.00, stdev=21.34 00:28:06.131 lat (msec): min=13, max=145, avg=75.04, stdev=21.35 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 1.00th=[ 25], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:28:06.131 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:28:06.131 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 111], 00:28:06.131 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 146], 00:28:06.131 | 99.99th=[ 146] 00:28:06.131 bw ( KiB/s): min= 624, max= 1264, per=3.67%, avg=848.40, stdev=154.07, samples=20 00:28:06.131 iops : min= 156, max= 316, avg=212.10, stdev=38.52, samples=20 00:28:06.131 lat (msec) : 20=0.09%, 50=11.14%, 100=73.09%, 250=15.68% 00:28:06.131 cpu : usr=37.40%, sys=1.80%, ctx=1154, majf=0, minf=1074 00:28:06.131 IO depths : 1=0.1%, 2=3.4%, 4=14.3%, 8=67.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=91.7%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename1: (groupid=0, jobs=1): err= 0: pid=90244: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=243, BW=972KiB/s (996kB/s)(9732KiB/10008msec) 00:28:06.131 slat (usec): min=3, max=4053, avg=40.19, stdev=271.37 00:28:06.131 clat (msec): min=9, max=140, avg=65.63, stdev=18.70 00:28:06.131 lat (msec): min=9, max=140, avg=65.67, stdev=18.71 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:28:06.131 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 70], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 105], 00:28:06.131 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 115], 99.95th=[ 140], 00:28:06.131 | 99.99th=[ 140] 00:28:06.131 bw ( KiB/s): min= 720, max= 1128, per=4.16%, avg=962.26, stdev=137.67, samples=19 00:28:06.131 iops : min= 180, max= 282, avg=240.53, stdev=34.37, samples=19 00:28:06.131 lat (msec) : 10=0.25%, 20=0.25%, 50=26.96%, 100=65.64%, 250=6.91% 00:28:06.131 cpu : usr=44.04%, sys=2.15%, ctx=1429, majf=0, minf=1072 00:28:06.131 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename1: (groupid=0, jobs=1): err= 0: pid=90245: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=244, BW=978KiB/s (1001kB/s)(9812KiB/10037msec) 00:28:06.131 slat (usec): min=3, max=8054, avg=44.95, stdev=362.19 00:28:06.131 clat (msec): min=25, max=138, avg=65.26, stdev=18.06 00:28:06.131 lat (msec): min=25, max=138, avg=65.31, stdev=18.05 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 
1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:28:06.131 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 89], 95.00th=[ 105], 00:28:06.131 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 118], 00:28:06.131 | 99.99th=[ 138] 00:28:06.131 bw ( KiB/s): min= 712, max= 1152, per=4.22%, avg=974.80, stdev=112.64, samples=20 00:28:06.131 iops : min= 178, max= 288, avg=243.70, stdev=28.16, samples=20 00:28:06.131 lat (msec) : 50=28.37%, 100=64.61%, 250=7.01% 00:28:06.131 cpu : usr=39.06%, sys=1.74%, ctx=1102, majf=0, minf=1074 00:28:06.131 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename1: (groupid=0, jobs=1): err= 0: pid=90246: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=237, BW=952KiB/s (974kB/s)(9520KiB/10005msec) 00:28:06.131 slat (usec): min=3, max=8055, avg=36.44, stdev=329.02 00:28:06.131 clat (msec): min=4, max=135, avg=67.06, stdev=18.36 00:28:06.131 lat (msec): min=4, max=135, avg=67.09, stdev=18.36 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 48], 00:28:06.131 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 96], 95.00th=[ 107], 00:28:06.131 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 125], 99.95th=[ 136], 00:28:06.131 | 99.99th=[ 136] 00:28:06.131 bw ( KiB/s): min= 720, max= 1072, per=4.07%, avg=941.89, stdev=114.88, samples=19 00:28:06.131 iops : min= 180, max= 268, avg=235.47, stdev=28.72, samples=19 00:28:06.131 lat (msec) : 10=0.38%, 20=0.29%, 50=23.53%, 100=68.53%, 250=7.27% 00:28:06.131 cpu : usr=36.22%, sys=1.65%, ctx=1056, majf=0, minf=1071 00:28:06.131 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=80.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename2: (groupid=0, jobs=1): err= 0: pid=90247: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=249, BW=996KiB/s (1020kB/s)(9976KiB/10012msec) 00:28:06.131 slat (usec): min=3, max=9042, avg=34.18, stdev=255.61 00:28:06.131 clat (msec): min=15, max=118, avg=64.05, stdev=17.89 00:28:06.131 lat (msec): min=15, max=118, avg=64.08, stdev=17.88 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.131 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 70], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 105], 00:28:06.131 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 120], 00:28:06.131 | 99.99th=[ 120] 00:28:06.131 bw ( KiB/s): min= 712, max= 1152, per=4.28%, avg=988.21, stdev=128.70, samples=19 00:28:06.131 iops : min= 178, max= 288, avg=247.05, stdev=32.18, samples=19 00:28:06.131 lat (msec) : 20=0.24%, 50=29.59%, 100=63.71%, 250=6.46% 00:28:06.131 cpu : usr=41.60%, sys=1.80%, ctx=1315, majf=0, minf=1073 00:28:06.131 IO depths : 1=0.1%, 2=0.3%, 
4=1.2%, 8=82.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.131 filename2: (groupid=0, jobs=1): err= 0: pid=90248: Sun Nov 17 08:28:10 2024 00:28:06.131 read: IOPS=239, BW=958KiB/s (981kB/s)(9584KiB/10009msec) 00:28:06.131 slat (usec): min=3, max=8059, avg=39.07, stdev=371.81 00:28:06.131 clat (msec): min=11, max=119, avg=66.65, stdev=17.73 00:28:06.131 lat (msec): min=11, max=119, avg=66.69, stdev=17.74 00:28:06.131 clat percentiles (msec): 00:28:06.131 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:28:06.131 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:28:06.131 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 107], 00:28:06.131 | 99.00th=[ 109], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 121], 00:28:06.131 | 99.99th=[ 121] 00:28:06.131 bw ( KiB/s): min= 720, max= 1128, per=4.11%, avg=950.68, stdev=112.11, samples=19 00:28:06.131 iops : min= 180, max= 282, avg=237.63, stdev=27.99, samples=19 00:28:06.131 lat (msec) : 20=0.17%, 50=24.87%, 100=69.16%, 250=5.80% 00:28:06.131 cpu : usr=32.46%, sys=1.20%, ctx=976, majf=0, minf=1074 00:28:06.131 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:06.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.131 issued rwts: total=2396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 filename2: (groupid=0, jobs=1): err= 0: pid=90249: Sun Nov 17 08:28:10 2024 00:28:06.132 read: IOPS=240, BW=960KiB/s (983kB/s)(9608KiB/10004msec) 00:28:06.132 slat (usec): min=3, max=8061, avg=40.06, stdev=320.14 00:28:06.132 clat (msec): min=5, max=134, avg=66.43, stdev=19.16 00:28:06.132 lat (msec): min=5, max=134, avg=66.47, stdev=19.16 00:28:06.132 clat percentiles (msec): 00:28:06.132 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:28:06.132 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 71], 00:28:06.132 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 96], 95.00th=[ 107], 00:28:06.132 | 99.00th=[ 111], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 134], 00:28:06.132 | 99.99th=[ 134] 00:28:06.132 bw ( KiB/s): min= 712, max= 1104, per=4.10%, avg=946.58, stdev=132.10, samples=19 00:28:06.132 iops : min= 178, max= 276, avg=236.63, stdev=33.03, samples=19 00:28:06.132 lat (msec) : 10=0.42%, 20=0.37%, 50=26.06%, 100=65.36%, 250=7.79% 00:28:06.132 cpu : usr=40.30%, sys=1.81%, ctx=1310, majf=0, minf=1073 00:28:06.132 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:06.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 complete : 0=0.0%, 4=87.8%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 filename2: (groupid=0, jobs=1): err= 0: pid=90250: Sun Nov 17 08:28:10 2024 00:28:06.132 read: IOPS=245, BW=981KiB/s (1004kB/s)(9812KiB/10006msec) 00:28:06.132 slat (usec): min=3, max=4034, avg=24.53, stdev=115.27 00:28:06.132 clat (msec): min=9, max=138, avg=65.15, stdev=18.82 
00:28:06.132 lat (msec): min=9, max=138, avg=65.17, stdev=18.82 00:28:06.132 clat percentiles (msec): 00:28:06.132 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.132 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:28:06.132 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 94], 95.00th=[ 108], 00:28:06.132 | 99.00th=[ 114], 99.50th=[ 115], 99.90th=[ 136], 99.95th=[ 138], 00:28:06.132 | 99.99th=[ 138] 00:28:06.132 bw ( KiB/s): min= 688, max= 1128, per=4.20%, avg=971.16, stdev=130.28, samples=19 00:28:06.132 iops : min= 172, max= 282, avg=242.74, stdev=32.55, samples=19 00:28:06.132 lat (msec) : 10=0.12%, 20=0.29%, 50=28.70%, 100=64.04%, 250=6.85% 00:28:06.132 cpu : usr=39.65%, sys=1.86%, ctx=1348, majf=0, minf=1074 00:28:06.132 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:06.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 filename2: (groupid=0, jobs=1): err= 0: pid=90251: Sun Nov 17 08:28:10 2024 00:28:06.132 read: IOPS=238, BW=955KiB/s (977kB/s)(9588KiB/10045msec) 00:28:06.132 slat (usec): min=3, max=8051, avg=38.62, stdev=352.85 00:28:06.132 clat (msec): min=16, max=132, avg=66.80, stdev=17.78 00:28:06.132 lat (msec): min=16, max=132, avg=66.84, stdev=17.79 00:28:06.132 clat percentiles (msec): 00:28:06.132 | 1.00th=[ 23], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 48], 00:28:06.132 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:28:06.132 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 108], 00:28:06.132 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 117], 00:28:06.132 | 99.99th=[ 133] 00:28:06.132 bw ( KiB/s): min= 720, max= 1152, per=4.13%, avg=954.40, stdev=113.70, samples=20 00:28:06.132 iops : min= 180, max= 288, avg=238.60, stdev=28.43, samples=20 00:28:06.132 lat (msec) : 20=0.75%, 50=22.24%, 100=70.67%, 250=6.34% 00:28:06.132 cpu : usr=32.28%, sys=1.22%, ctx=993, majf=0, minf=1074 00:28:06.132 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:06.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 filename2: (groupid=0, jobs=1): err= 0: pid=90252: Sun Nov 17 08:28:10 2024 00:28:06.132 read: IOPS=235, BW=942KiB/s (965kB/s)(9504KiB/10090msec) 00:28:06.132 slat (usec): min=3, max=8061, avg=49.34, stdev=448.39 00:28:06.132 clat (msec): min=16, max=144, avg=67.57, stdev=18.65 00:28:06.132 lat (msec): min=16, max=144, avg=67.62, stdev=18.66 00:28:06.132 clat percentiles (msec): 00:28:06.132 | 1.00th=[ 21], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 50], 00:28:06.132 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 71], 00:28:06.132 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 96], 95.00th=[ 107], 00:28:06.132 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 125], 99.95th=[ 125], 00:28:06.132 | 99.99th=[ 144] 00:28:06.132 bw ( KiB/s): min= 720, max= 1383, per=4.08%, avg=943.25, stdev=143.59, samples=20 00:28:06.132 iops : min= 180, max= 345, avg=235.75, stdev=35.78, samples=20 00:28:06.132 lat (msec) : 20=0.84%, 50=19.65%, 
100=72.47%, 250=7.03% 00:28:06.132 cpu : usr=35.29%, sys=1.03%, ctx=1111, majf=0, minf=1073 00:28:06.132 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:28:06.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 filename2: (groupid=0, jobs=1): err= 0: pid=90253: Sun Nov 17 08:28:10 2024 00:28:06.132 read: IOPS=241, BW=966KiB/s (990kB/s)(9712KiB/10049msec) 00:28:06.132 slat (usec): min=3, max=8050, avg=42.96, stdev=325.73 00:28:06.132 clat (msec): min=17, max=119, avg=65.95, stdev=18.41 00:28:06.132 lat (msec): min=17, max=119, avg=65.99, stdev=18.41 00:28:06.132 clat percentiles (msec): 00:28:06.132 | 1.00th=[ 26], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:28:06.132 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 71], 00:28:06.132 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 105], 00:28:06.132 | 99.00th=[ 111], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:28:06.132 | 99.99th=[ 121] 00:28:06.132 bw ( KiB/s): min= 712, max= 1176, per=4.17%, avg=964.40, stdev=133.83, samples=20 00:28:06.132 iops : min= 178, max= 294, avg=241.10, stdev=33.46, samples=20 00:28:06.132 lat (msec) : 20=0.08%, 50=25.33%, 100=66.97%, 250=7.62% 00:28:06.132 cpu : usr=42.36%, sys=2.10%, ctx=1817, majf=0, minf=1074 00:28:06.132 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:06.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 filename2: (groupid=0, jobs=1): err= 0: pid=90254: Sun Nov 17 08:28:10 2024 00:28:06.132 read: IOPS=249, BW=997KiB/s (1021kB/s)(9992KiB/10021msec) 00:28:06.132 slat (usec): min=3, max=8038, avg=34.99, stdev=311.71 00:28:06.132 clat (msec): min=27, max=119, avg=64.00, stdev=18.02 00:28:06.132 lat (msec): min=27, max=119, avg=64.04, stdev=18.01 00:28:06.132 clat percentiles (msec): 00:28:06.132 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 48], 00:28:06.132 | 30.00th=[ 48], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 70], 00:28:06.132 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 105], 00:28:06.132 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:28:06.132 | 99.99th=[ 121] 00:28:06.132 bw ( KiB/s): min= 720, max= 1176, per=4.30%, avg=994.95, stdev=124.00, samples=20 00:28:06.132 iops : min= 180, max= 294, avg=248.70, stdev=30.99, samples=20 00:28:06.132 lat (msec) : 50=32.39%, 100=61.57%, 250=6.04% 00:28:06.132 cpu : usr=38.74%, sys=1.60%, ctx=1098, majf=0, minf=1072 00:28:06.132 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:06.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.132 issued rwts: total=2498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:06.132 00:28:06.132 Run status group 0 (all jobs): 00:28:06.132 READ: bw=22.6MiB/s (23.7MB/s), 849KiB/s-1008KiB/s (869kB/s-1033kB/s), io=228MiB (239MB), run=10004-10090msec 
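A quick editorial check on the run-status summary above (not part of the test output): the aggregate READ bandwidth is the total io divided by the longest job runtime, and each job's "per=" value is its share of that aggregate.

awk 'BEGIN {
    total_mib = 228; runtime_s = 10.090          # io=228MiB, run=10004-10090msec
    agg = total_mib / runtime_s                  # ~22.6 MiB/s, matching bw=22.6MiB/s
    printf "aggregate %.1f MiB/s; 4.37%% of it = %.0f KiB/s\n", agg, agg * 1024 * 0.0437
}'
# The second figure (~1011 KiB/s) lines up with avg=1009.55 KiB/s reported for
# filename0 (pid=90231), whose per= was 4.37%.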
00:28:06.701 ----------------------------------------------------- 00:28:06.701 Suppressions used: 00:28:06.701 count bytes template 00:28:06.701 45 402 /usr/src/fio/parse.c 00:28:06.701 1 8 libtcmalloc_minimal.so 00:28:06.701 1 904 libcrypto.so 00:28:06.701 ----------------------------------------------------- 00:28:06.701 00:28:06.701 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:06.701 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:06.701 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.701 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.701 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 bdev_null0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.702 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.962 [2024-11-17 08:28:11.715835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:06.962 08:28:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.962 bdev_null1 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:06.962 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.963 { 00:28:06.963 "params": { 00:28:06.963 "name": "Nvme$subsystem", 00:28:06.963 "trtype": "$TEST_TRANSPORT", 00:28:06.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.963 "adrfam": "ipv4", 00:28:06.963 "trsvcid": "$NVMF_PORT", 00:28:06.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.963 "hdgst": ${hdgst:-false}, 
00:28:06.963 "ddgst": ${ddgst:-false} 00:28:06.963 }, 00:28:06.963 "method": "bdev_nvme_attach_controller" 00:28:06.963 } 00:28:06.963 EOF 00:28:06.963 )") 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:06.963 { 00:28:06.963 "params": { 00:28:06.963 "name": "Nvme$subsystem", 00:28:06.963 "trtype": "$TEST_TRANSPORT", 00:28:06.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.963 "adrfam": "ipv4", 00:28:06.963 "trsvcid": "$NVMF_PORT", 00:28:06.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.963 "hdgst": ${hdgst:-false}, 00:28:06.963 "ddgst": ${ddgst:-false} 00:28:06.963 }, 00:28:06.963 "method": "bdev_nvme_attach_controller" 00:28:06.963 } 00:28:06.963 EOF 00:28:06.963 )") 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:06.963 "params": { 00:28:06.963 "name": "Nvme0", 00:28:06.963 "trtype": "tcp", 00:28:06.963 "traddr": "10.0.0.3", 00:28:06.963 "adrfam": "ipv4", 00:28:06.963 "trsvcid": "4420", 00:28:06.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.963 "hdgst": false, 00:28:06.963 "ddgst": false 00:28:06.963 }, 00:28:06.963 "method": "bdev_nvme_attach_controller" 00:28:06.963 },{ 00:28:06.963 "params": { 00:28:06.963 "name": "Nvme1", 00:28:06.963 "trtype": "tcp", 00:28:06.963 "traddr": "10.0.0.3", 00:28:06.963 "adrfam": "ipv4", 00:28:06.963 "trsvcid": "4420", 00:28:06.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.963 "hdgst": false, 00:28:06.963 "ddgst": false 00:28:06.963 }, 00:28:06.963 "method": "bdev_nvme_attach_controller" 00:28:06.963 }' 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:06.963 08:28:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.223 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:07.223 ... 00:28:07.223 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:07.223 ... 
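The launch just above is fio driven through the SPDK bdev ioengine: the plugin is LD_PRELOADed from build/fio/spdk_bdev (together with libasan on this sanitized build) and the NVMe-oF attach configuration is passed via --spdk_json_conf. The job file produced by gen_fio_conf is not echoed into the log; the sketch below shows what a job matching the parameters set at target/dif.sh@115 could look like. The job-file contents, the config path, and the Nvme0n1/Nvme1n1 bdev names are assumptions for illustration only.

  # Sketch only: manual equivalent of the fio_bdev invocation above.
  cat > /tmp/dif_rand_params.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  ; thread=1 is required by the SPDK fio ioengines
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  ; time_based is assumed here; the trace only shows runtime=5
  time_based=1
  runtime=5

  ; bdev names below are assumptions (controllers attached as Nvme0/Nvme1)
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/nvme_attach.json /tmp/dif_rand_params.fio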
00:28:07.223 fio-3.35 00:28:07.223 Starting 4 threads 00:28:13.792 00:28:13.793 filename0: (groupid=0, jobs=1): err= 0: pid=90405: Sun Nov 17 08:28:18 2024 00:28:13.793 read: IOPS=2397, BW=18.7MiB/s (19.6MB/s)(93.7MiB/5002msec) 00:28:13.793 slat (nsec): min=5827, max=88075, avg=25202.91, stdev=15670.18 00:28:13.793 clat (usec): min=1107, max=7716, avg=3266.29, stdev=855.67 00:28:13.793 lat (usec): min=1131, max=7751, avg=3291.50, stdev=851.08 00:28:13.793 clat percentiles (usec): 00:28:13.793 | 1.00th=[ 1876], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2245], 00:28:13.793 | 30.00th=[ 2376], 40.00th=[ 2835], 50.00th=[ 3589], 60.00th=[ 3785], 00:28:13.793 | 70.00th=[ 3982], 80.00th=[ 4047], 90.00th=[ 4178], 95.00th=[ 4293], 00:28:13.793 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5866], 99.95th=[ 7701], 00:28:13.793 | 99.99th=[ 7701] 00:28:13.793 bw ( KiB/s): min=18384, max=20288, per=25.44%, avg=19237.33, stdev=636.74, samples=9 00:28:13.793 iops : min= 2298, max= 2536, avg=2404.67, stdev=79.59, samples=9 00:28:13.793 lat (msec) : 2=1.93%, 4=72.49%, 10=25.57% 00:28:13.793 cpu : usr=95.68%, sys=3.52%, ctx=46, majf=0, minf=1075 00:28:13.793 IO depths : 1=0.7%, 2=5.5%, 4=60.8%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 issued rwts: total=11993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.793 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:13.793 filename0: (groupid=0, jobs=1): err= 0: pid=90406: Sun Nov 17 08:28:18 2024 00:28:13.793 read: IOPS=2112, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5003msec) 00:28:13.793 slat (nsec): min=4739, max=74380, avg=13958.21, stdev=9146.64 00:28:13.793 clat (usec): min=782, max=5726, avg=3740.45, stdev=812.74 00:28:13.793 lat (usec): min=798, max=5746, avg=3754.40, stdev=811.30 00:28:13.793 clat percentiles (usec): 00:28:13.793 | 1.00th=[ 1696], 5.00th=[ 1926], 10.00th=[ 2245], 20.00th=[ 3195], 00:28:13.793 | 30.00th=[ 3654], 40.00th=[ 3851], 50.00th=[ 4113], 60.00th=[ 4228], 00:28:13.793 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:28:13.793 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5669], 00:28:13.793 | 99.99th=[ 5735] 00:28:13.793 bw ( KiB/s): min=14976, max=20880, per=22.35%, avg=16902.30, stdev=1957.96, samples=10 00:28:13.793 iops : min= 1872, max= 2610, avg=2112.70, stdev=244.81, samples=10 00:28:13.793 lat (usec) : 1000=0.01% 00:28:13.793 lat (msec) : 2=6.06%, 4=38.09%, 10=55.84% 00:28:13.793 cpu : usr=92.36%, sys=6.86%, ctx=8, majf=0, minf=1073 00:28:13.793 IO depths : 1=0.2%, 2=16.4%, 4=54.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 issued rwts: total=10570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.793 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:13.793 filename1: (groupid=0, jobs=1): err= 0: pid=90407: Sun Nov 17 08:28:18 2024 00:28:13.793 read: IOPS=2539, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5001msec) 00:28:13.793 slat (nsec): min=6248, max=74617, avg=14350.26, stdev=9070.89 00:28:13.793 clat (usec): min=722, max=7381, avg=3113.93, stdev=927.86 00:28:13.793 lat (usec): min=730, max=7413, avg=3128.28, stdev=926.77 00:28:13.793 clat percentiles (usec): 00:28:13.793 | 1.00th=[ 1188], 5.00th=[ 1844], 10.00th=[ 1975], 
20.00th=[ 2180], 00:28:13.793 | 30.00th=[ 2343], 40.00th=[ 2573], 50.00th=[ 3163], 60.00th=[ 3589], 00:28:13.793 | 70.00th=[ 3982], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4293], 00:28:13.793 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 5080], 99.95th=[ 7308], 00:28:13.793 | 99.99th=[ 7373] 00:28:13.793 bw ( KiB/s): min=16272, max=22032, per=26.70%, avg=20193.78, stdev=1742.94, samples=9 00:28:13.793 iops : min= 2034, max= 2754, avg=2524.22, stdev=217.87, samples=9 00:28:13.793 lat (usec) : 750=0.03%, 1000=0.05% 00:28:13.793 lat (msec) : 2=10.88%, 4=59.34%, 10=29.71% 00:28:13.793 cpu : usr=91.44%, sys=7.64%, ctx=14, majf=0, minf=1075 00:28:13.793 IO depths : 1=0.1%, 2=2.6%, 4=62.3%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 complete : 0=0.0%, 4=99.0%, 8=1.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 issued rwts: total=12698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.793 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:13.793 filename1: (groupid=0, jobs=1): err= 0: pid=90408: Sun Nov 17 08:28:18 2024 00:28:13.793 read: IOPS=2405, BW=18.8MiB/s (19.7MB/s)(94.0MiB/5001msec) 00:28:13.793 slat (nsec): min=6596, max=90499, avg=25364.36, stdev=15707.49 00:28:13.793 clat (usec): min=614, max=5969, avg=3255.34, stdev=856.89 00:28:13.793 lat (usec): min=621, max=6001, avg=3280.71, stdev=852.55 00:28:13.793 clat percentiles (usec): 00:28:13.793 | 1.00th=[ 1795], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2278], 00:28:13.793 | 30.00th=[ 2409], 40.00th=[ 2835], 50.00th=[ 3589], 60.00th=[ 3785], 00:28:13.793 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4178], 95.00th=[ 4293], 00:28:13.793 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5538], 99.95th=[ 5932], 00:28:13.793 | 99.99th=[ 5932] 00:28:13.793 bw ( KiB/s): min=18512, max=20240, per=25.39%, avg=19201.78, stdev=611.75, samples=9 00:28:13.793 iops : min= 2314, max= 2530, avg=2400.22, stdev=76.47, samples=9 00:28:13.793 lat (usec) : 750=0.04%, 1000=0.02% 00:28:13.793 lat (msec) : 2=2.55%, 4=73.10%, 10=24.29% 00:28:13.793 cpu : usr=95.88%, sys=3.34%, ctx=7, majf=0, minf=1061 00:28:13.793 IO depths : 1=0.6%, 2=5.4%, 4=60.8%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.793 issued rwts: total=12032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.793 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:13.793 00:28:13.793 Run status group 0 (all jobs): 00:28:13.793 READ: bw=73.9MiB/s (77.4MB/s), 16.5MiB/s-19.8MiB/s (17.3MB/s-20.8MB/s), io=369MiB (387MB), run=5001-5003msec 00:28:14.739 ----------------------------------------------------- 00:28:14.739 Suppressions used: 00:28:14.739 count bytes template 00:28:14.739 6 52 /usr/src/fio/parse.c 00:28:14.739 1 8 libtcmalloc_minimal.so 00:28:14.739 1 904 libcrypto.so 00:28:14.739 ----------------------------------------------------- 00:28:14.739 00:28:14.739 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:14.739 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:14.739 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.739 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:14.739 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 
00:28:14.739 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 ************************************ 00:28:14.740 END TEST fio_dif_rand_params 00:28:14.740 ************************************ 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 00:28:14.740 real 0m28.616s 00:28:14.740 user 2m10.567s 00:28:14.740 sys 0m7.501s 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 08:28:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:14.740 08:28:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:14.740 08:28:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 ************************************ 00:28:14.740 START TEST fio_dif_digest 00:28:14.740 ************************************ 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:14.740 
08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 bdev_null0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.740 [2024-11-17 08:28:19.651204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.740 { 00:28:14.740 "params": { 00:28:14.740 "name": "Nvme$subsystem", 00:28:14.740 "trtype": "$TEST_TRANSPORT", 00:28:14.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.740 "adrfam": "ipv4", 00:28:14.740 "trsvcid": "$NVMF_PORT", 00:28:14.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.740 "hdgst": ${hdgst:-false}, 00:28:14.740 "ddgst": ${ddgst:-false} 00:28:14.740 }, 00:28:14.740 "method": "bdev_nvme_attach_controller" 00:28:14.740 } 00:28:14.740 EOF 00:28:14.740 )") 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
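As in the earlier run, the per-controller fragment assembled through the heredoc and jq above becomes the JSON document fio reads on /dev/fd/62. Only the bdev_nvme_attach_controller entry is printed in the trace (next lines); the surrounding structure sketched here is assumed from SPDK's standard JSON config layout, with the parameter values, including hdgst/ddgst set true for this digest test, copied from the trace.

  # Sketch only: the shape of the config handed to fio via --spdk_json_conf.
  # Outer "subsystems"/"config" wrapper is assumed; params match the trace below.
  cat > /tmp/nvme_attach.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF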
00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:14.740 "params": { 00:28:14.740 "name": "Nvme0", 00:28:14.740 "trtype": "tcp", 00:28:14.740 "traddr": "10.0.0.3", 00:28:14.740 "adrfam": "ipv4", 00:28:14.740 "trsvcid": "4420", 00:28:14.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.740 "hdgst": true, 00:28:14.740 "ddgst": true 00:28:14.740 }, 00:28:14.740 "method": "bdev_nvme_attach_controller" 00:28:14.740 }' 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:14.740 08:28:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.000 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:15.000 ... 00:28:15.000 fio-3.35 00:28:15.000 Starting 3 threads 00:28:27.296 00:28:27.296 filename0: (groupid=0, jobs=1): err= 0: pid=90518: Sun Nov 17 08:28:30 2024 00:28:27.296 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(317MiB/10002msec) 00:28:27.296 slat (nsec): min=6484, max=36414, avg=11404.12, stdev=5078.65 00:28:27.296 clat (usec): min=10640, max=15119, avg=11793.52, stdev=161.84 00:28:27.296 lat (usec): min=10648, max=15135, avg=11804.92, stdev=162.10 00:28:27.296 clat percentiles (usec): 00:28:27.296 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:28:27.296 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:28:27.296 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:28:27.296 | 99.00th=[12125], 99.50th=[12125], 99.90th=[15139], 99.95th=[15139], 00:28:27.296 | 99.99th=[15139] 00:28:27.296 bw ( KiB/s): min=32256, max=33024, per=33.36%, avg=32498.53, stdev=366.77, samples=19 00:28:27.296 iops : min= 252, max= 258, avg=253.89, stdev= 2.87, samples=19 00:28:27.296 lat (msec) : 20=100.00% 00:28:27.296 cpu : usr=93.91%, sys=5.62%, ctx=17, majf=0, minf=1075 00:28:27.296 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=2538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:27.296 filename0: (groupid=0, jobs=1): err= 0: pid=90519: Sun Nov 17 08:28:30 2024 00:28:27.296 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(317MiB/10003msec) 00:28:27.296 slat (nsec): min=6432, max=41871, avg=13333.62, stdev=8457.70 00:28:27.296 clat (usec): min=9372, max=15235, avg=11788.17, stdev=182.91 00:28:27.296 lat (usec): min=9379, max=15259, avg=11801.50, stdev=183.52 00:28:27.296 clat percentiles (usec): 00:28:27.296 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:28:27.296 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:28:27.296 | 
70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:28:27.296 | 99.00th=[12125], 99.50th=[12125], 99.90th=[15270], 99.95th=[15270], 00:28:27.296 | 99.99th=[15270] 00:28:27.296 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32458.11, stdev=347.45, samples=19 00:28:27.296 iops : min= 252, max= 258, avg=253.58, stdev= 2.71, samples=19 00:28:27.296 lat (msec) : 10=0.12%, 20=99.88% 00:28:27.296 cpu : usr=94.00%, sys=5.54%, ctx=46, majf=0, minf=1072 00:28:27.296 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=2538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:27.296 filename0: (groupid=0, jobs=1): err= 0: pid=90520: Sun Nov 17 08:28:30 2024 00:28:27.296 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(317MiB/10001msec) 00:28:27.296 slat (usec): min=6, max=112, avg=17.96, stdev=12.47 00:28:27.296 clat (usec): min=6617, max=15184, avg=11773.57, stdev=248.03 00:28:27.296 lat (usec): min=6626, max=15218, avg=11791.53, stdev=248.90 00:28:27.296 clat percentiles (usec): 00:28:27.296 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:28:27.296 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:28:27.296 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:28:27.296 | 99.00th=[12125], 99.50th=[12125], 99.90th=[15139], 99.95th=[15139], 00:28:27.296 | 99.99th=[15139] 00:28:27.296 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32458.11, stdev=347.45, samples=19 00:28:27.296 iops : min= 252, max= 258, avg=253.58, stdev= 2.71, samples=19 00:28:27.296 lat (msec) : 10=0.12%, 20=99.88% 00:28:27.296 cpu : usr=93.72%, sys=5.72%, ctx=7, majf=0, minf=1074 00:28:27.296 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=2538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:27.296 00:28:27.296 Run status group 0 (all jobs): 00:28:27.296 READ: bw=95.1MiB/s (99.8MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=952MiB (998MB), run=10001-10003msec 00:28:27.296 ----------------------------------------------------- 00:28:27.296 Suppressions used: 00:28:27.296 count bytes template 00:28:27.296 5 44 /usr/src/fio/parse.c 00:28:27.296 1 8 libtcmalloc_minimal.so 00:28:27.296 1 904 libcrypto.so 00:28:27.296 ----------------------------------------------------- 00:28:27.296 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 ************************************ 00:28:27.297 END TEST fio_dif_digest 00:28:27.297 ************************************ 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.297 00:28:27.297 real 0m12.652s 00:28:27.297 user 0m30.264s 00:28:27.297 sys 0m2.227s 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.297 08:28:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:27.557 08:28:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:27.557 08:28:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.557 rmmod nvme_tcp 00:28:27.557 rmmod nvme_fabrics 00:28:27.557 rmmod nvme_keyring 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 89730 ']' 00:28:27.557 08:28:32 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 89730 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 89730 ']' 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 89730 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89730 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.557 killing process with pid 89730 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89730' 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@973 -- # kill 89730 00:28:27.557 08:28:32 nvmf_dif -- common/autotest_common.sh@978 -- # wait 89730 00:28:28.938 08:28:33 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:28.938 08:28:33 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:29.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:29.198 Waiting for block devices as requested 00:28:29.457 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.457 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@524 -- 
# nvmf_tcp_fini 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:29.717 08:28:34 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:29.977 08:28:34 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:29.977 08:28:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.977 08:28:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:29.977 08:28:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.977 08:28:34 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:29.977 00:28:29.977 real 1m11.867s 00:28:29.977 user 4m9.507s 00:28:29.977 sys 0m21.224s 00:28:29.977 ************************************ 00:28:29.977 END TEST nvmf_dif 00:28:29.977 ************************************ 00:28:29.977 08:28:34 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.977 08:28:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.977 08:28:34 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:29.977 08:28:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.977 08:28:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.977 08:28:34 -- common/autotest_common.sh@10 -- # set +x 00:28:29.977 ************************************ 00:28:29.977 START TEST nvmf_abort_qd_sizes 00:28:29.977 ************************************ 00:28:29.977 08:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:30.238 * Looking for test storage... 
00:28:30.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.238 --rc genhtml_branch_coverage=1 00:28:30.238 --rc genhtml_function_coverage=1 00:28:30.238 --rc genhtml_legend=1 00:28:30.238 --rc geninfo_all_blocks=1 00:28:30.238 --rc geninfo_unexecuted_blocks=1 00:28:30.238 00:28:30.238 ' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:30.238 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.238 --rc genhtml_branch_coverage=1 00:28:30.238 --rc genhtml_function_coverage=1 00:28:30.238 --rc genhtml_legend=1 00:28:30.238 --rc geninfo_all_blocks=1 00:28:30.238 --rc geninfo_unexecuted_blocks=1 00:28:30.238 00:28:30.238 ' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.238 --rc genhtml_branch_coverage=1 00:28:30.238 --rc genhtml_function_coverage=1 00:28:30.238 --rc genhtml_legend=1 00:28:30.238 --rc geninfo_all_blocks=1 00:28:30.238 --rc geninfo_unexecuted_blocks=1 00:28:30.238 00:28:30.238 ' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.238 --rc genhtml_branch_coverage=1 00:28:30.238 --rc genhtml_function_coverage=1 00:28:30.238 --rc genhtml_legend=1 00:28:30.238 --rc geninfo_all_blocks=1 00:28:30.238 --rc geninfo_unexecuted_blocks=1 00:28:30.238 00:28:30.238 ' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.238 08:28:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:30.239 Cannot find device "nvmf_init_br" 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:30.239 Cannot find device "nvmf_init_br2" 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:30.239 Cannot find device "nvmf_tgt_br" 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:30.239 Cannot find device "nvmf_tgt_br2" 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:30.239 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:30.499 Cannot find device "nvmf_init_br" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:28:30.499 Cannot find device "nvmf_init_br2" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:30.499 Cannot find device "nvmf_tgt_br" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:30.499 Cannot find device "nvmf_tgt_br2" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:30.499 Cannot find device "nvmf_br" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:30.499 Cannot find device "nvmf_init_if" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:30.499 Cannot find device "nvmf_init_if2" 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:30.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:30.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:30.499 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
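The block above is nvmf_veth_init tearing down and rebuilding the test network: the "Cannot find device" messages are the expected cleanup of interfaces that do not exist yet on a fresh runner. Reduced to the target-facing half and stripped of the xtrace noise, the topology built here amounts to the sketch below (device names and addresses copied from the trace; the nvmf_br bridge and the iptables ACCEPT rules that complete the path follow in the next lines of the log).

  # Sketch only: core of the veth/netns topology nvmf_veth_init builds.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # nvmf_init_br and nvmf_tgt_br are then enslaved to the nvmf_br bridge,
  # which is what lets the initiator at 10.0.0.1 reach the target at 10.0.0.3.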
00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:30.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:30.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:28:30.759 00:28:30.759 --- 10.0.0.3 ping statistics --- 00:28:30.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.759 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:30.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:30.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:28:30.759 00:28:30.759 --- 10.0.0.4 ping statistics --- 00:28:30.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.759 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:30.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:28:30.759 00:28:30.759 --- 10.0.0.1 ping statistics --- 00:28:30.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.759 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:30.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:28:30.759 00:28:30.759 --- 10.0.0.2 ping statistics --- 00:28:30.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.759 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:28:30.759 08:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:31.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:31.957 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:31.957 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.957 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.957 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.957 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.957 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.957 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=91199 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 91199 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 91199 ']' 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.958 08:28:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:32.217 [2024-11-17 08:28:37.037268] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:28:32.217 [2024-11-17 08:28:37.037374] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.476 [2024-11-17 08:28:37.228954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.476 [2024-11-17 08:28:37.338141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.476 [2024-11-17 08:28:37.338194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.477 [2024-11-17 08:28:37.338210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.477 [2024-11-17 08:28:37.338221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.477 [2024-11-17 08:28:37.338233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.477 [2024-11-17 08:28:37.340201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.477 [2024-11-17 08:28:37.340391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.477 [2024-11-17 08:28:37.340577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.477 [2024-11-17 08:28:37.340634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.736 [2024-11-17 08:28:37.546323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:32.996 08:28:37 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
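The trace above is the nvme_in_userspace helper enumerating NVMe controllers still bound to the kernel nvme driver: it builds the PCI class/subclass/prog-if string 01/08/02, filters lspci -mm -n -D output on that class code, and keeps only the BDFs that have an entry under /sys/bus/pci/drivers/nvme. A minimal standalone sketch of the same enumeration (illustrative only; it assumes lspci is present and the sysfs layout shown in the trace):

#!/usr/bin/env bash
# Sketch: list NVMe PCI functions (class 01, subclass 08, prog-if 02)
# that are currently claimed by the kernel nvme driver.
nvmes=()
while read -r bdf; do
  # keep the device only if the kernel nvme driver has bound it
  [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
done < <(lspci -mm -n -D | grep -i -- -p02 | awk -v cc=0108 '$2 ~ cc {print $1}' | tr -d '"')
(( ${#nvmes[@]} )) && printf '%s\n' "${nvmes[@]}"

In the run above this yields 0000:00:10.0 and 0000:00:11.0, and the test then takes the first BDF as its spdk_target device.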
00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.996 08:28:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:32.996 ************************************ 00:28:32.996 START TEST spdk_target_abort 00:28:32.996 ************************************ 00:28:32.996 08:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:28:32.996 08:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:32.996 08:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:32.996 08:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.996 08:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:33.256 spdk_targetn1 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:33.256 [2024-11-17 08:28:38.042938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:33.256 [2024-11-17 08:28:38.101329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:33.256 08:28:38 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:33.256 08:28:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:36.549 Initializing NVMe Controllers 00:28:36.549 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:36.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:36.549 Initialization complete. Launching workers. 
00:28:36.549 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10776, failed: 0 00:28:36.549 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1051, failed to submit 9725 00:28:36.549 success 841, unsuccessful 210, failed 0 00:28:36.549 08:28:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:36.549 08:28:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.742 Initializing NVMe Controllers 00:28:40.742 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:40.742 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:40.742 Initialization complete. Launching workers. 00:28:40.742 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:28:40.742 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7731 00:28:40.742 success 322, unsuccessful 899, failed 0 00:28:40.742 08:28:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:40.742 08:28:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.278 Initializing NVMe Controllers 00:28:43.278 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:43.278 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:43.278 Initialization complete. Launching workers. 
00:28:43.278 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31986, failed: 0 00:28:43.278 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2328, failed to submit 29658 00:28:43.278 success 476, unsuccessful 1852, failed 0 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.278 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 91199 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 91199 ']' 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 91199 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91199 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.847 killing process with pid 91199 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91199' 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 91199 00:28:43.847 08:28:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 91199 00:28:44.783 00:28:44.783 real 0m11.764s 00:28:44.783 user 0m45.745s 00:28:44.783 sys 0m2.758s 00:28:44.783 08:28:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.783 08:28:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.783 ************************************ 00:28:44.783 END TEST spdk_target_abort 00:28:44.783 ************************************ 00:28:45.043 08:28:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:45.043 08:28:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.043 08:28:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.043 08:28:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:45.043 ************************************ 00:28:45.043 START TEST kernel_target_abort 00:28:45.043 
************************************ 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:45.043 08:28:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:45.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:45.612 Waiting for block devices as requested 00:28:45.612 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:45.871 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:46.131 No valid GPT data, bailing 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:46.131 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:46.391 No valid GPT data, bailing 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:46.391 No valid GPT data, bailing 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:46.391 No valid GPT data, bailing 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:46.391 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 --hostid=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 -a 10.0.0.1 -t tcp -s 4420 00:28:46.652 00:28:46.652 Discovery Log Number of Records 2, Generation counter 2 00:28:46.652 =====Discovery Log Entry 0====== 00:28:46.652 trtype: tcp 00:28:46.652 adrfam: ipv4 00:28:46.652 subtype: current discovery subsystem 00:28:46.652 treq: not specified, sq flow control disable supported 00:28:46.652 portid: 1 00:28:46.652 trsvcid: 4420 00:28:46.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:46.652 traddr: 10.0.0.1 00:28:46.652 eflags: none 00:28:46.652 sectype: none 00:28:46.652 =====Discovery Log Entry 1====== 00:28:46.652 trtype: tcp 00:28:46.652 adrfam: ipv4 00:28:46.652 subtype: nvme subsystem 00:28:46.652 treq: not specified, sq flow control disable supported 00:28:46.652 portid: 1 00:28:46.652 trsvcid: 4420 00:28:46.652 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:46.652 traddr: 10.0.0.1 00:28:46.652 eflags: none 00:28:46.652 sectype: none 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:46.652 08:28:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:46.652 08:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:49.945 Initializing NVMe Controllers 00:28:49.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:49.945 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:49.945 Initialization complete. Launching workers. 00:28:49.945 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34574, failed: 0 00:28:49.945 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34574, failed to submit 0 00:28:49.945 success 0, unsuccessful 34574, failed 0 00:28:49.945 08:28:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:49.945 08:28:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:53.234 Initializing NVMe Controllers 00:28:53.234 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:53.234 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:53.234 Initialization complete. Launching workers. 
00:28:53.234 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66434, failed: 0 00:28:53.234 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36528, failed to submit 29906 00:28:53.234 success 0, unsuccessful 36528, failed 0 00:28:53.234 08:28:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:53.234 08:28:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.525 Initializing NVMe Controllers 00:28:56.525 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:56.525 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:56.525 Initialization complete. Launching workers. 00:28:56.525 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88728, failed: 0 00:28:56.525 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22108, failed to submit 66620 00:28:56.525 success 0, unsuccessful 22108, failed 0 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:56.525 08:29:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:57.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:58.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:58.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:58.840 00:28:58.840 real 0m13.762s 00:28:58.840 user 0m6.498s 00:28:58.840 sys 0m4.576s 00:28:58.841 08:29:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.841 08:29:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:58.841 ************************************ 00:28:58.841 END TEST kernel_target_abort 00:28:58.841 ************************************ 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:58.841 
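The kernel_target_abort run above exercises the Linux kernel nvmet target instead of the SPDK app: configure_kernel_target creates the subsystem, namespace, and TCP port under /sys/kernel/config/nvmet and links them together, and clean_kernel_target unwinds that before unloading the modules. A condensed sketch of that configfs lifecycle, using the NQN, address, port, and backing device picked in the trace; the attribute file names are the standard nvmet configfs ones, since the trace does not show its redirect targets:

#!/usr/bin/env bash
# Sketch: stand up and tear down a kernel NVMe-oF TCP target via configfs,
# mirroring the nvmf/common.sh helpers traced above.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet_tcp                                   # pulls in nvmet as a dependency

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing device chosen by the trace
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                  # expose the subsystem on the port

# ... run the abort workload against 10.0.0.1:4420 ...

echo 0 > "$subsys/namespaces/1/enable"               # teardown, reverse order
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet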
08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.841 rmmod nvme_tcp 00:28:58.841 rmmod nvme_fabrics 00:28:58.841 rmmod nvme_keyring 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 91199 ']' 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 91199 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 91199 ']' 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 91199 00:28:58.841 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91199) - No such process 00:28:58.841 Process with pid 91199 is not found 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 91199 is not found' 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:58.841 08:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:59.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:59.409 Waiting for block devices as requested 00:28:59.409 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:59.669 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:59.669 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:59.988 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:59.988 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:59.988 08:29:04 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:59.989 00:28:59.989 real 0m30.024s 00:28:59.989 user 0m53.657s 00:28:59.989 sys 0m9.559s 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.989 08:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:59.989 ************************************ 00:28:59.989 END TEST nvmf_abort_qd_sizes 00:28:59.989 ************************************ 00:28:59.989 08:29:04 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:59.989 08:29:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.989 08:29:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.989 08:29:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.312 ************************************ 00:29:00.312 START TEST keyring_file 00:29:00.312 ************************************ 00:29:00.312 08:29:04 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:00.312 * Looking for test storage... 
00:29:00.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@345 -- # : 1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@353 -- # local d=1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.312 08:29:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.312 --rc genhtml_branch_coverage=1 00:29:00.312 --rc genhtml_function_coverage=1 00:29:00.312 --rc genhtml_legend=1 00:29:00.312 --rc geninfo_all_blocks=1 00:29:00.312 --rc geninfo_unexecuted_blocks=1 00:29:00.312 00:29:00.312 ' 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.312 --rc genhtml_branch_coverage=1 00:29:00.312 --rc genhtml_function_coverage=1 00:29:00.312 --rc genhtml_legend=1 00:29:00.312 --rc geninfo_all_blocks=1 00:29:00.312 --rc 
geninfo_unexecuted_blocks=1 00:29:00.312 00:29:00.312 ' 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.312 --rc genhtml_branch_coverage=1 00:29:00.312 --rc genhtml_function_coverage=1 00:29:00.312 --rc genhtml_legend=1 00:29:00.312 --rc geninfo_all_blocks=1 00:29:00.312 --rc geninfo_unexecuted_blocks=1 00:29:00.312 00:29:00.312 ' 00:29:00.312 08:29:05 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.312 --rc genhtml_branch_coverage=1 00:29:00.312 --rc genhtml_function_coverage=1 00:29:00.312 --rc genhtml_legend=1 00:29:00.312 --rc geninfo_all_blocks=1 00:29:00.312 --rc geninfo_unexecuted_blocks=1 00:29:00.312 00:29:00.312 ' 00:29:00.312 08:29:05 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:00.312 08:29:05 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:00.312 08:29:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:00.312 08:29:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.312 08:29:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.312 08:29:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.312 08:29:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:00.313 08:29:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.313 08:29:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.313 08:29:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.313 08:29:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.313 08:29:05 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.313 08:29:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.313 08:29:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.313 08:29:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:00.313 08:29:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.313 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:00.313 08:29:05 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lUfULGJ1Z5 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:00.313 08:29:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lUfULGJ1Z5 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lUfULGJ1Z5 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lUfULGJ1Z5 00:29:00.313 08:29:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:00.313 08:29:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:00.572 08:29:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rzdfFh7EdG 00:29:00.572 08:29:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:00.572 08:29:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:00.572 08:29:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:00.572 08:29:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:00.572 08:29:05 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:00.572 08:29:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:00.572 08:29:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:00.572 08:29:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rzdfFh7EdG 00:29:00.572 08:29:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rzdfFh7EdG 00:29:00.572 08:29:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rzdfFh7EdG 00:29:00.572 08:29:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=92242 00:29:00.572 08:29:05 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:00.572 08:29:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 92242 00:29:00.572 08:29:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 92242 ']' 00:29:00.572 08:29:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.572 08:29:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.572 08:29:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:00.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.572 08:29:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.572 08:29:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:00.572 [2024-11-17 08:29:05.498711] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:00.572 [2024-11-17 08:29:05.498846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92242 ] 00:29:00.832 [2024-11-17 08:29:05.684002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.832 [2024-11-17 08:29:05.817777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.400 [2024-11-17 08:29:06.115910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:01.969 08:29:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:01.969 [2024-11-17 08:29:06.820062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.969 null0 00:29:01.969 [2024-11-17 08:29:06.852008] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:01.969 [2024-11-17 08:29:06.852307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.969 08:29:06 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:01.969 08:29:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:01.970 [2024-11-17 08:29:06.883936] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:01.970 request: 00:29:01.970 { 00:29:01.970 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.970 "secure_channel": false, 00:29:01.970 "listen_address": { 00:29:01.970 "trtype": "tcp", 00:29:01.970 "traddr": "127.0.0.1", 00:29:01.970 "trsvcid": "4420" 00:29:01.970 }, 00:29:01.970 "method": "nvmf_subsystem_add_listener", 00:29:01.970 "req_id": 1 00:29:01.970 } 
00:29:01.970 Got JSON-RPC error response 00:29:01.970 response: 00:29:01.970 { 00:29:01.970 "code": -32602, 00:29:01.970 "message": "Invalid parameters" 00:29:01.970 } 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:01.970 08:29:06 keyring_file -- keyring/file.sh@47 -- # bperfpid=92265 00:29:01.970 08:29:06 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:01.970 08:29:06 keyring_file -- keyring/file.sh@49 -- # waitforlisten 92265 /var/tmp/bperf.sock 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 92265 ']' 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.970 08:29:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:02.229 [2024-11-17 08:29:06.998502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
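The /tmp/tmp.* files created by prep_key/format_interchange_psk earlier in this run hold the pre-shared key in the NVMe TLS interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash indicator (00 = no hash, 01 = SHA-256, 02 = SHA-384) and a base64 payload, separated by colons. A minimal sketch of producing such a string is below; it assumes the usual layout in which the base64 payload is the key bytes followed by their CRC-32, and it treats the 32-character test key as hex — neither detail is spelled out in the trace, so this illustrates the idea rather than reproducing nvmf/common.sh exactly.

```bash
# Sketch only: build an NVMe TLS PSK interchange string and store it the way
# the keyring_file test expects (owner-only permissions).  Output path and
# byte-level layout are illustrative assumptions, not taken from the log.
hexkey=00112233445566778899aabbccddeeff
python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])                 # assume the test key is hex
crc = zlib.crc32(key).to_bytes(4, "little")      # CRC-32 appended after the key bytes
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
' "$hexkey" > /tmp/psk_key0.txt
chmod 0600 /tmp/psk_key0.txt   # looser modes are rejected later in this test (0660 case)
```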
00:29:02.229 [2024-11-17 08:29:06.998653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92265 ] 00:29:02.229 [2024-11-17 08:29:07.182913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.489 [2024-11-17 08:29:07.325690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.748 [2024-11-17 08:29:07.555901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:03.007 08:29:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.007 08:29:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:03.007 08:29:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:03.007 08:29:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:03.007 08:29:08 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rzdfFh7EdG 00:29:03.007 08:29:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rzdfFh7EdG 00:29:03.266 08:29:08 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:29:03.266 08:29:08 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:03.266 08:29:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.266 08:29:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.266 08:29:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:03.525 08:29:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lUfULGJ1Z5 == \/\t\m\p\/\t\m\p\.\l\U\f\U\L\G\J\1\Z\5 ]] 00:29:03.525 08:29:08 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:29:03.525 08:29:08 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:29:03.525 08:29:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.525 08:29:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.525 08:29:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:03.784 08:29:08 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.rzdfFh7EdG == \/\t\m\p\/\t\m\p\.\r\z\d\f\F\h\7\E\d\G ]] 00:29:03.784 08:29:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:29:03.784 08:29:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.784 08:29:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.784 08:29:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.784 08:29:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.784 08:29:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.043 08:29:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:04.043 08:29:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:29:04.043 08:29:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:04.043 08:29:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.043 08:29:08 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.043 08:29:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.043 08:29:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.303 08:29:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:29:04.303 08:29:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.303 08:29:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.303 [2024-11-17 08:29:09.261058] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:04.563 nvme0n1 00:29:04.563 08:29:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:29:04.563 08:29:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:04.563 08:29:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.563 08:29:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.563 08:29:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.563 08:29:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.822 08:29:09 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:29:04.822 08:29:09 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:29:04.822 08:29:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.822 08:29:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:04.822 08:29:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.822 08:29:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.822 08:29:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.822 08:29:09 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:29:04.822 08:29:09 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.081 Running I/O for 1 seconds... 
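The I/O that starts here is driven entirely over bdevperf's own RPC socket: the earlier bdevperf invocation with -z -r /var/tmp/bperf.sock comes up idle, the keys and the TLS-protected nvme0 controller are configured through that socket (which is why key0's refcnt climbs from 1 to 2 once the controller holds a reference), and bdevperf.py perform_tests then launches the one-second randrw run. A condensed sketch using the same binaries, socket path and arguments that appear in this log:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# 1. start bdevperf idle (-z) on its own RPC socket (the test waits for the
#    socket to appear before issuing RPCs)
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r "$SOCK" -z &

# 2. register the key file and attach the controller with that PSK
"$SPDK/scripts/rpc.py" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# 3. the attached controller takes a reference on key0 (refcnt 1 -> 2)
"$SPDK/scripts/rpc.py" -s "$SOCK" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

# 4. kick off the benchmark that produces the IOPS/latency table below
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```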
00:29:06.018 11589.00 IOPS, 45.27 MiB/s 00:29:06.018 Latency(us) 00:29:06.018 [2024-11-17T08:29:11.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.018 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:06.018 nvme0n1 : 1.01 11591.78 45.28 0.00 0.00 10982.02 9159.25 20845.19 00:29:06.018 [2024-11-17T08:29:11.030Z] =================================================================================================================== 00:29:06.018 [2024-11-17T08:29:11.030Z] Total : 11591.78 45.28 0.00 0.00 10982.02 9159.25 20845.19 00:29:06.018 { 00:29:06.018 "results": [ 00:29:06.018 { 00:29:06.018 "job": "nvme0n1", 00:29:06.018 "core_mask": "0x2", 00:29:06.018 "workload": "randrw", 00:29:06.018 "percentage": 50, 00:29:06.018 "status": "finished", 00:29:06.018 "queue_depth": 128, 00:29:06.018 "io_size": 4096, 00:29:06.018 "runtime": 1.010889, 00:29:06.018 "iops": 11591.777138736301, 00:29:06.018 "mibps": 45.280379448188675, 00:29:06.018 "io_failed": 0, 00:29:06.018 "io_timeout": 0, 00:29:06.018 "avg_latency_us": 10982.015093930937, 00:29:06.019 "min_latency_us": 9159.248192771085, 00:29:06.019 "max_latency_us": 20845.185542168674 00:29:06.019 } 00:29:06.019 ], 00:29:06.019 "core_count": 1 00:29:06.019 } 00:29:06.019 08:29:10 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:06.019 08:29:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:06.278 08:29:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:29:06.278 08:29:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:06.278 08:29:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.278 08:29:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.278 08:29:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.278 08:29:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.537 08:29:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:06.537 08:29:11 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:29:06.537 08:29:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:06.537 08:29:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.537 08:29:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.537 08:29:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:06.537 08:29:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.797 08:29:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:29:06.797 08:29:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:06.797 08:29:11 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:06.797 08:29:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:06.797 [2024-11-17 08:29:11.779533] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:06.797 [2024-11-17 08:29:11.779652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:06.797 [2024-11-17 08:29:11.780576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:06.797 [2024-11-17 08:29:11.781567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:06.797 [2024-11-17 08:29:11.781611] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:06.797 [2024-11-17 08:29:11.781627] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:06.797 [2024-11-17 08:29:11.781643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
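The *ERROR* lines above are the expected result of the negative test: re-attaching the same subsystem with --psk key1 instead of key0 fails while the TLS connection is being set up ("Transport endpoint is not connected", controller left in error state), and the RPC reports it as code -5 / "Input/output error" in the response that follows. A sketch of checking that expectation by hand, with the arguments copied from the log (the target-side PSK provisioning happens earlier in file.sh, outside this excerpt):

```bash
# Expect failure: only the key the target side was set up with (key0) can
# complete the TLS handshake; an attach with key1 must not succeed.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi
```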
00:29:06.797 request: 00:29:06.797 { 00:29:06.797 "name": "nvme0", 00:29:06.797 "trtype": "tcp", 00:29:06.797 "traddr": "127.0.0.1", 00:29:06.797 "adrfam": "ipv4", 00:29:06.797 "trsvcid": "4420", 00:29:06.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.797 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:06.797 "prchk_reftag": false, 00:29:06.797 "prchk_guard": false, 00:29:06.797 "hdgst": false, 00:29:06.797 "ddgst": false, 00:29:06.797 "psk": "key1", 00:29:06.797 "allow_unrecognized_csi": false, 00:29:06.797 "method": "bdev_nvme_attach_controller", 00:29:06.797 "req_id": 1 00:29:06.797 } 00:29:06.797 Got JSON-RPC error response 00:29:06.797 response: 00:29:06.797 { 00:29:06.797 "code": -5, 00:29:06.797 "message": "Input/output error" 00:29:06.797 } 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.797 08:29:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.797 08:29:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:29:06.797 08:29:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:06.797 08:29:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.056 08:29:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.056 08:29:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.056 08:29:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.056 08:29:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:07.056 08:29:12 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:29:07.056 08:29:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:07.056 08:29:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.056 08:29:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.056 08:29:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.056 08:29:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:07.316 08:29:12 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:29:07.316 08:29:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:29:07.316 08:29:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:07.575 08:29:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:29:07.575 08:29:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:07.834 08:29:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:29:07.834 08:29:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:29:07.834 08:29:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.834 08:29:12 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:29:07.834 08:29:12 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.lUfULGJ1Z5 00:29:07.834 08:29:12 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:07.834 08:29:12 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:29:07.834 08:29:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:07.834 08:29:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:08.093 08:29:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.093 08:29:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:08.093 08:29:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.093 08:29:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:08.093 08:29:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:08.093 [2024-11-17 08:29:13.016524] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lUfULGJ1Z5': 0100660 00:29:08.093 [2024-11-17 08:29:13.016574] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:08.093 request: 00:29:08.093 { 00:29:08.093 "name": "key0", 00:29:08.093 "path": "/tmp/tmp.lUfULGJ1Z5", 00:29:08.093 "method": "keyring_file_add_key", 00:29:08.093 "req_id": 1 00:29:08.093 } 00:29:08.093 Got JSON-RPC error response 00:29:08.093 response: 00:29:08.093 { 00:29:08.093 "code": -1, 00:29:08.093 "message": "Operation not permitted" 00:29:08.093 } 00:29:08.093 08:29:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:08.093 08:29:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.093 08:29:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.093 08:29:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.093 08:29:13 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.lUfULGJ1Z5 00:29:08.093 08:29:13 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:08.093 08:29:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5 00:29:08.353 08:29:13 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.lUfULGJ1Z5 00:29:08.353 08:29:13 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:29:08.353 08:29:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:08.353 08:29:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.353 08:29:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.353 08:29:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.353 08:29:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.612 08:29:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:29:08.612 08:29:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.612 08:29:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:08.612 08:29:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.612 08:29:13 
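keyring_file also enforces key-file permissions: with the file chmod'ed to 0660 above, keyring_file_add_key is rejected with "Invalid permissions for key file ... 0100660" and a -1 / "Operation not permitted" response, and the add only succeeds once the mode is back to owner-only 0600. A minimal reproduction sketch (the temp-file name is copied from this run; in practice it comes from mktemp):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

chmod 0660 /tmp/tmp.lUfULGJ1Z5
if "$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5; then
    echo "group-accessible key file was unexpectedly accepted" >&2
    exit 1
fi

chmod 0600 /tmp/tmp.lUfULGJ1Z5                                   # owner-only again
"$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.lUfULGJ1Z5  # now accepted
```

Deleting the key file afterwards (the rm -f above) sets up the next negative test, whose "Could not stat key file" / -19 "No such device" response appears below.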
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:08.612 08:29:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.612 08:29:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:08.612 08:29:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.612 08:29:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.612 08:29:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.871 [2024-11-17 08:29:13.667707] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lUfULGJ1Z5': No such file or directory 00:29:08.871 [2024-11-17 08:29:13.667781] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:08.871 [2024-11-17 08:29:13.667807] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:08.871 [2024-11-17 08:29:13.667821] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:29:08.871 [2024-11-17 08:29:13.667848] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:08.871 [2024-11-17 08:29:13.667861] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:08.871 request: 00:29:08.871 { 00:29:08.871 "name": "nvme0", 00:29:08.871 "trtype": "tcp", 00:29:08.871 "traddr": "127.0.0.1", 00:29:08.871 "adrfam": "ipv4", 00:29:08.871 "trsvcid": "4420", 00:29:08.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.871 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.871 "prchk_reftag": false, 00:29:08.871 "prchk_guard": false, 00:29:08.871 "hdgst": false, 00:29:08.871 "ddgst": false, 00:29:08.871 "psk": "key0", 00:29:08.871 "allow_unrecognized_csi": false, 00:29:08.871 "method": "bdev_nvme_attach_controller", 00:29:08.871 "req_id": 1 00:29:08.871 } 00:29:08.871 Got JSON-RPC error response 00:29:08.871 response: 00:29:08.871 { 00:29:08.871 "code": -19, 00:29:08.871 "message": "No such device" 00:29:08.871 } 00:29:08.871 08:29:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:08.871 08:29:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.871 08:29:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.871 08:29:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.871 08:29:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:29:08.871 08:29:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:09.131 08:29:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:09.131 
08:29:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UvkigBVawB 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:09.131 08:29:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:09.131 08:29:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:09.131 08:29:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:09.131 08:29:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:09.131 08:29:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:09.131 08:29:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UvkigBVawB 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UvkigBVawB 00:29:09.131 08:29:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.UvkigBVawB 00:29:09.131 08:29:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UvkigBVawB 00:29:09.131 08:29:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UvkigBVawB 00:29:09.390 08:29:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.390 08:29:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.649 nvme0n1 00:29:09.649 08:29:14 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:29:09.649 08:29:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:09.649 08:29:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.649 08:29:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.649 08:29:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.649 08:29:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.649 08:29:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:29:09.649 08:29:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:29:09.649 08:29:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:09.909 08:29:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:29:09.909 08:29:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:29:09.909 08:29:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.909 08:29:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.909 08:29:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.167 08:29:15 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:29:10.167 08:29:15 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:29:10.167 08:29:15 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:29:10.167 08:29:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.167 08:29:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.167 08:29:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.167 08:29:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.426 08:29:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:29:10.426 08:29:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:10.426 08:29:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:10.685 08:29:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:29:10.685 08:29:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:10.685 08:29:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.944 08:29:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:10.944 08:29:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UvkigBVawB 00:29:10.944 08:29:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UvkigBVawB 00:29:10.944 08:29:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rzdfFh7EdG 00:29:10.944 08:29:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rzdfFh7EdG 00:29:11.203 08:29:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.203 08:29:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.462 nvme0n1 00:29:11.462 08:29:16 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:11.462 08:29:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:11.721 08:29:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:11.721 "subsystems": [ 00:29:11.721 { 00:29:11.721 "subsystem": "keyring", 00:29:11.721 "config": [ 00:29:11.721 { 00:29:11.721 "method": "keyring_file_add_key", 00:29:11.721 "params": { 00:29:11.721 "name": "key0", 00:29:11.721 "path": "/tmp/tmp.UvkigBVawB" 00:29:11.721 } 00:29:11.721 }, 00:29:11.721 { 00:29:11.721 "method": "keyring_file_add_key", 00:29:11.721 "params": { 00:29:11.721 "name": "key1", 00:29:11.721 "path": "/tmp/tmp.rzdfFh7EdG" 00:29:11.721 } 00:29:11.721 } 00:29:11.721 ] 00:29:11.721 }, 00:29:11.721 { 00:29:11.721 "subsystem": "iobuf", 00:29:11.721 "config": [ 00:29:11.721 { 00:29:11.721 "method": "iobuf_set_options", 00:29:11.721 "params": { 00:29:11.721 "small_pool_count": 8192, 00:29:11.721 "large_pool_count": 1024, 00:29:11.721 "small_bufsize": 8192, 00:29:11.721 "large_bufsize": 135168, 00:29:11.721 "enable_numa": false 00:29:11.721 } 00:29:11.721 } 00:29:11.721 ] 00:29:11.721 }, 00:29:11.721 { 00:29:11.721 "subsystem": 
"sock", 00:29:11.721 "config": [ 00:29:11.722 { 00:29:11.722 "method": "sock_set_default_impl", 00:29:11.722 "params": { 00:29:11.722 "impl_name": "uring" 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "sock_impl_set_options", 00:29:11.722 "params": { 00:29:11.722 "impl_name": "ssl", 00:29:11.722 "recv_buf_size": 4096, 00:29:11.722 "send_buf_size": 4096, 00:29:11.722 "enable_recv_pipe": true, 00:29:11.722 "enable_quickack": false, 00:29:11.722 "enable_placement_id": 0, 00:29:11.722 "enable_zerocopy_send_server": true, 00:29:11.722 "enable_zerocopy_send_client": false, 00:29:11.722 "zerocopy_threshold": 0, 00:29:11.722 "tls_version": 0, 00:29:11.722 "enable_ktls": false 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "sock_impl_set_options", 00:29:11.722 "params": { 00:29:11.722 "impl_name": "posix", 00:29:11.722 "recv_buf_size": 2097152, 00:29:11.722 "send_buf_size": 2097152, 00:29:11.722 "enable_recv_pipe": true, 00:29:11.722 "enable_quickack": false, 00:29:11.722 "enable_placement_id": 0, 00:29:11.722 "enable_zerocopy_send_server": true, 00:29:11.722 "enable_zerocopy_send_client": false, 00:29:11.722 "zerocopy_threshold": 0, 00:29:11.722 "tls_version": 0, 00:29:11.722 "enable_ktls": false 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "sock_impl_set_options", 00:29:11.722 "params": { 00:29:11.722 "impl_name": "uring", 00:29:11.722 "recv_buf_size": 2097152, 00:29:11.722 "send_buf_size": 2097152, 00:29:11.722 "enable_recv_pipe": true, 00:29:11.722 "enable_quickack": false, 00:29:11.722 "enable_placement_id": 0, 00:29:11.722 "enable_zerocopy_send_server": false, 00:29:11.722 "enable_zerocopy_send_client": false, 00:29:11.722 "zerocopy_threshold": 0, 00:29:11.722 "tls_version": 0, 00:29:11.722 "enable_ktls": false 00:29:11.722 } 00:29:11.722 } 00:29:11.722 ] 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "subsystem": "vmd", 00:29:11.722 "config": [] 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "subsystem": "accel", 00:29:11.722 "config": [ 00:29:11.722 { 00:29:11.722 "method": "accel_set_options", 00:29:11.722 "params": { 00:29:11.722 "small_cache_size": 128, 00:29:11.722 "large_cache_size": 16, 00:29:11.722 "task_count": 2048, 00:29:11.722 "sequence_count": 2048, 00:29:11.722 "buf_count": 2048 00:29:11.722 } 00:29:11.722 } 00:29:11.722 ] 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "subsystem": "bdev", 00:29:11.722 "config": [ 00:29:11.722 { 00:29:11.722 "method": "bdev_set_options", 00:29:11.722 "params": { 00:29:11.722 "bdev_io_pool_size": 65535, 00:29:11.722 "bdev_io_cache_size": 256, 00:29:11.722 "bdev_auto_examine": true, 00:29:11.722 "iobuf_small_cache_size": 128, 00:29:11.722 "iobuf_large_cache_size": 16 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "bdev_raid_set_options", 00:29:11.722 "params": { 00:29:11.722 "process_window_size_kb": 1024, 00:29:11.722 "process_max_bandwidth_mb_sec": 0 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "bdev_iscsi_set_options", 00:29:11.722 "params": { 00:29:11.722 "timeout_sec": 30 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "bdev_nvme_set_options", 00:29:11.722 "params": { 00:29:11.722 "action_on_timeout": "none", 00:29:11.722 "timeout_us": 0, 00:29:11.722 "timeout_admin_us": 0, 00:29:11.722 "keep_alive_timeout_ms": 10000, 00:29:11.722 "arbitration_burst": 0, 00:29:11.722 "low_priority_weight": 0, 00:29:11.722 "medium_priority_weight": 0, 00:29:11.722 "high_priority_weight": 0, 00:29:11.722 "nvme_adminq_poll_period_us": 
10000, 00:29:11.722 "nvme_ioq_poll_period_us": 0, 00:29:11.722 "io_queue_requests": 512, 00:29:11.722 "delay_cmd_submit": true, 00:29:11.722 "transport_retry_count": 4, 00:29:11.722 "bdev_retry_count": 3, 00:29:11.722 "transport_ack_timeout": 0, 00:29:11.722 "ctrlr_loss_timeout_sec": 0, 00:29:11.722 "reconnect_delay_sec": 0, 00:29:11.722 "fast_io_fail_timeout_sec": 0, 00:29:11.722 "disable_auto_failback": false, 00:29:11.722 "generate_uuids": false, 00:29:11.722 "transport_tos": 0, 00:29:11.722 "nvme_error_stat": false, 00:29:11.722 "rdma_srq_size": 0, 00:29:11.722 "io_path_stat": false, 00:29:11.722 "allow_accel_sequence": false, 00:29:11.722 "rdma_max_cq_size": 0, 00:29:11.722 "rdma_cm_event_timeout_ms": 0, 00:29:11.722 "dhchap_digests": [ 00:29:11.722 "sha256", 00:29:11.722 "sha384", 00:29:11.722 "sha512" 00:29:11.722 ], 00:29:11.722 "dhchap_dhgroups": [ 00:29:11.722 "null", 00:29:11.722 "ffdhe2048", 00:29:11.722 "ffdhe3072", 00:29:11.722 "ffdhe4096", 00:29:11.722 "ffdhe6144", 00:29:11.722 "ffdhe8192" 00:29:11.722 ] 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "bdev_nvme_attach_controller", 00:29:11.722 "params": { 00:29:11.722 "name": "nvme0", 00:29:11.722 "trtype": "TCP", 00:29:11.722 "adrfam": "IPv4", 00:29:11.722 "traddr": "127.0.0.1", 00:29:11.722 "trsvcid": "4420", 00:29:11.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.722 "prchk_reftag": false, 00:29:11.722 "prchk_guard": false, 00:29:11.722 "ctrlr_loss_timeout_sec": 0, 00:29:11.722 "reconnect_delay_sec": 0, 00:29:11.722 "fast_io_fail_timeout_sec": 0, 00:29:11.722 "psk": "key0", 00:29:11.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.722 "hdgst": false, 00:29:11.722 "ddgst": false, 00:29:11.722 "multipath": "multipath" 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "bdev_nvme_set_hotplug", 00:29:11.722 "params": { 00:29:11.722 "period_us": 100000, 00:29:11.722 "enable": false 00:29:11.722 } 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "method": "bdev_wait_for_examine" 00:29:11.722 } 00:29:11.722 ] 00:29:11.722 }, 00:29:11.722 { 00:29:11.722 "subsystem": "nbd", 00:29:11.722 "config": [] 00:29:11.722 } 00:29:11.722 ] 00:29:11.722 }' 00:29:11.722 08:29:16 keyring_file -- keyring/file.sh@115 -- # killprocess 92265 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 92265 ']' 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 92265 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92265 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:11.722 killing process with pid 92265 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92265' 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@973 -- # kill 92265 00:29:11.722 Received shutdown signal, test time was about 1.000000 seconds 00:29:11.722 00:29:11.722 Latency(us) 00:29:11.722 [2024-11-17T08:29:16.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.722 [2024-11-17T08:29:16.734Z] =================================================================================================================== 00:29:11.722 
[2024-11-17T08:29:16.734Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.722 08:29:16 keyring_file -- common/autotest_common.sh@978 -- # wait 92265 00:29:13.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.102 08:29:17 keyring_file -- keyring/file.sh@118 -- # bperfpid=92506 00:29:13.102 08:29:17 keyring_file -- keyring/file.sh@120 -- # waitforlisten 92506 /var/tmp/bperf.sock 00:29:13.102 08:29:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 92506 ']' 00:29:13.102 08:29:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.102 08:29:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.102 08:29:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.102 08:29:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.102 08:29:17 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:13.102 08:29:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:13.102 08:29:17 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:13.102 "subsystems": [ 00:29:13.102 { 00:29:13.102 "subsystem": "keyring", 00:29:13.102 "config": [ 00:29:13.102 { 00:29:13.102 "method": "keyring_file_add_key", 00:29:13.102 "params": { 00:29:13.102 "name": "key0", 00:29:13.102 "path": "/tmp/tmp.UvkigBVawB" 00:29:13.102 } 00:29:13.102 }, 00:29:13.102 { 00:29:13.102 "method": "keyring_file_add_key", 00:29:13.102 "params": { 00:29:13.102 "name": "key1", 00:29:13.102 "path": "/tmp/tmp.rzdfFh7EdG" 00:29:13.102 } 00:29:13.102 } 00:29:13.102 ] 00:29:13.102 }, 00:29:13.102 { 00:29:13.102 "subsystem": "iobuf", 00:29:13.102 "config": [ 00:29:13.102 { 00:29:13.102 "method": "iobuf_set_options", 00:29:13.102 "params": { 00:29:13.102 "small_pool_count": 8192, 00:29:13.102 "large_pool_count": 1024, 00:29:13.102 "small_bufsize": 8192, 00:29:13.102 "large_bufsize": 135168, 00:29:13.102 "enable_numa": false 00:29:13.102 } 00:29:13.102 } 00:29:13.102 ] 00:29:13.102 }, 00:29:13.102 { 00:29:13.102 "subsystem": "sock", 00:29:13.102 "config": [ 00:29:13.102 { 00:29:13.102 "method": "sock_set_default_impl", 00:29:13.102 "params": { 00:29:13.102 "impl_name": "uring" 00:29:13.102 } 00:29:13.102 }, 00:29:13.102 { 00:29:13.102 "method": "sock_impl_set_options", 00:29:13.102 "params": { 00:29:13.102 "impl_name": "ssl", 00:29:13.102 "recv_buf_size": 4096, 00:29:13.102 "send_buf_size": 4096, 00:29:13.102 "enable_recv_pipe": true, 00:29:13.102 "enable_quickack": false, 00:29:13.102 "enable_placement_id": 0, 00:29:13.102 "enable_zerocopy_send_server": true, 00:29:13.102 "enable_zerocopy_send_client": false, 00:29:13.102 "zerocopy_threshold": 0, 00:29:13.102 "tls_version": 0, 00:29:13.102 "enable_ktls": false 00:29:13.102 } 00:29:13.102 }, 00:29:13.102 { 00:29:13.102 "method": "sock_impl_set_options", 00:29:13.102 "params": { 00:29:13.102 "impl_name": "posix", 00:29:13.102 "recv_buf_size": 2097152, 00:29:13.102 "send_buf_size": 2097152, 00:29:13.102 "enable_recv_pipe": true, 00:29:13.102 "enable_quickack": false, 00:29:13.102 "enable_placement_id": 0, 00:29:13.102 "enable_zerocopy_send_server": true, 00:29:13.102 "enable_zerocopy_send_client": false, 00:29:13.102 "zerocopy_threshold": 0, 00:29:13.102 "tls_version": 0, 00:29:13.102 "enable_ktls": false 
00:29:13.102 } 00:29:13.102 }, 00:29:13.103 { 00:29:13.103 "method": "sock_impl_set_options", 00:29:13.103 "params": { 00:29:13.103 "impl_name": "uring", 00:29:13.103 "recv_buf_size": 2097152, 00:29:13.103 "send_buf_size": 2097152, 00:29:13.103 "enable_recv_pipe": true, 00:29:13.103 "enable_quickack": false, 00:29:13.103 "enable_placement_id": 0, 00:29:13.103 "enable_zerocopy_send_server": false, 00:29:13.103 "enable_zerocopy_send_client": false, 00:29:13.103 "zerocopy_threshold": 0, 00:29:13.103 "tls_version": 0, 00:29:13.103 "enable_ktls": false 00:29:13.103 } 00:29:13.103 } 00:29:13.103 ] 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "subsystem": "vmd", 00:29:13.103 "config": [] 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "subsystem": "accel", 00:29:13.103 "config": [ 00:29:13.103 { 00:29:13.103 "method": "accel_set_options", 00:29:13.103 "params": { 00:29:13.103 "small_cache_size": 128, 00:29:13.103 "large_cache_size": 16, 00:29:13.103 "task_count": 2048, 00:29:13.103 "sequence_count": 2048, 00:29:13.103 "buf_count": 2048 00:29:13.103 } 00:29:13.103 } 00:29:13.103 ] 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "subsystem": "bdev", 00:29:13.103 "config": [ 00:29:13.103 { 00:29:13.103 "method": "bdev_set_options", 00:29:13.103 "params": { 00:29:13.103 "bdev_io_pool_size": 65535, 00:29:13.103 "bdev_io_cache_size": 256, 00:29:13.103 "bdev_auto_examine": true, 00:29:13.103 "iobuf_small_cache_size": 128, 00:29:13.103 "iobuf_large_cache_size": 16 00:29:13.103 } 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "method": "bdev_raid_set_options", 00:29:13.103 "params": { 00:29:13.103 "process_window_size_kb": 1024, 00:29:13.103 "process_max_bandwidth_mb_sec": 0 00:29:13.103 } 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "method": "bdev_iscsi_set_options", 00:29:13.103 "params": { 00:29:13.103 "timeout_sec": 30 00:29:13.103 } 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "method": "bdev_nvme_set_options", 00:29:13.103 "params": { 00:29:13.103 "action_on_timeout": "none", 00:29:13.103 "timeout_us": 0, 00:29:13.103 "timeout_admin_us": 0, 00:29:13.103 "keep_alive_timeout_ms": 10000, 00:29:13.103 "arbitration_burst": 0, 00:29:13.103 "low_priority_weight": 0, 00:29:13.103 "medium_priority_weight": 0, 00:29:13.103 "high_priority_weight": 0, 00:29:13.103 "nvme_adminq_poll_period_us": 10000, 00:29:13.103 "nvme_ioq_poll_period_us": 0, 00:29:13.103 "io_queue_requests": 512, 00:29:13.103 "delay_cmd_submit": true, 00:29:13.103 "transport_retry_count": 4, 00:29:13.103 "bdev_retry_count": 3, 00:29:13.103 "transport_ack_timeout": 0, 00:29:13.103 "ctrlr_loss_timeout_sec": 0, 00:29:13.103 "reconnect_delay_sec": 0, 00:29:13.103 "fast_io_fail_timeout_sec": 0, 00:29:13.103 "disable_auto_failback": false, 00:29:13.103 "generate_uuids": false, 00:29:13.103 "transport_tos": 0, 00:29:13.103 "nvme_error_stat": false, 00:29:13.103 "rdma_srq_size": 0, 00:29:13.103 "io_path_stat": false, 00:29:13.103 "allow_accel_sequence": false, 00:29:13.103 "rdma_max_cq_size": 0, 00:29:13.103 "rdma_cm_event_timeout_ms": 0, 00:29:13.103 "dhchap_digests": [ 00:29:13.103 "sha256", 00:29:13.103 "sha384", 00:29:13.103 "sha512" 00:29:13.103 ], 00:29:13.103 "dhchap_dhgroups": [ 00:29:13.103 "null", 00:29:13.103 "ffdhe2048", 00:29:13.103 "ffdhe3072", 00:29:13.103 "ffdhe4096", 00:29:13.103 "ffdhe6144", 00:29:13.103 "ffdhe8192" 00:29:13.103 ] 00:29:13.103 } 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "method": "bdev_nvme_attach_controller", 00:29:13.103 "params": { 00:29:13.103 "name": "nvme0", 00:29:13.103 "trtype": "TCP", 00:29:13.103 "adrfam": "IPv4", 
00:29:13.103 "traddr": "127.0.0.1", 00:29:13.103 "trsvcid": "4420", 00:29:13.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.103 "prchk_reftag": false, 00:29:13.103 "prchk_guard": false, 00:29:13.103 "ctrlr_loss_timeout_sec": 0, 00:29:13.103 "reconnect_delay_sec": 0, 00:29:13.103 "fast_io_fail_timeout_sec": 0, 00:29:13.103 "psk": "key0", 00:29:13.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.103 "hdgst": false, 00:29:13.103 "ddgst": false, 00:29:13.103 "multipath": "multipath" 00:29:13.103 } 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "method": "bdev_nvme_set_hotplug", 00:29:13.103 "params": { 00:29:13.103 "period_us": 100000, 00:29:13.103 "enable": false 00:29:13.103 } 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "method": "bdev_wait_for_examine" 00:29:13.103 } 00:29:13.103 ] 00:29:13.103 }, 00:29:13.103 { 00:29:13.103 "subsystem": "nbd", 00:29:13.103 "config": [] 00:29:13.103 } 00:29:13.103 ] 00:29:13.103 }' 00:29:13.103 [2024-11-17 08:29:17.855117] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:13.103 [2024-11-17 08:29:17.855237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92506 ] 00:29:13.103 [2024-11-17 08:29:18.041845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.362 [2024-11-17 08:29:18.168742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.621 [2024-11-17 08:29:18.483282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:13.881 [2024-11-17 08:29:18.643714] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:13.881 08:29:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.881 08:29:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:13.881 08:29:18 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:13.881 08:29:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.881 08:29:18 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:14.141 08:29:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:14.141 08:29:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:14.141 08:29:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:14.141 08:29:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.141 08:29:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.141 08:29:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.141 08:29:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.399 08:29:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:14.399 08:29:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:14.399 08:29:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:14.399 08:29:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.399 08:29:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:14.399 08:29:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.399 08:29:19 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.399 08:29:19 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:14.399 08:29:19 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:14.399 08:29:19 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:14.399 08:29:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:14.658 08:29:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:14.658 08:29:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:14.658 08:29:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UvkigBVawB /tmp/tmp.rzdfFh7EdG 00:29:14.658 08:29:19 keyring_file -- keyring/file.sh@20 -- # killprocess 92506 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 92506 ']' 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 92506 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92506 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.658 killing process with pid 92506 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92506' 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@973 -- # kill 92506 00:29:14.658 Received shutdown signal, test time was about 1.000000 seconds 00:29:14.658 00:29:14.658 Latency(us) 00:29:14.658 [2024-11-17T08:29:19.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.658 [2024-11-17T08:29:19.670Z] =================================================================================================================== 00:29:14.658 [2024-11-17T08:29:19.670Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:14.658 08:29:19 keyring_file -- common/autotest_common.sh@978 -- # wait 92506 00:29:16.037 08:29:20 keyring_file -- keyring/file.sh@21 -- # killprocess 92242 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 92242 ']' 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 92242 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92242 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.037 killing process with pid 92242 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92242' 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@973 -- # kill 92242 00:29:16.037 08:29:20 keyring_file -- common/autotest_common.sh@978 -- # wait 92242 00:29:18.576 00:29:18.576 real 0m18.206s 00:29:18.576 user 0m38.077s 00:29:18.576 sys 0m4.266s 00:29:18.576 08:29:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.576 08:29:23 
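For context on the two large JSON blobs above: they are the same configuration twice — first as the output of save_config against the original bdevperf instance, then echoed back into a second bdevperf started with -c /dev/fd/63, which recreates the keyring_file keys and the TLS-protected nvme0 controller at startup, as the subsequent keyring_get_keys and bdev_nvme_get_controllers checks confirm. Roughly, that round-trip looks like the sketch below (the test wires it up inside file.sh; the process substitution is an assumption consistent with the /dev/fd/63 argument seen in the trace):

```bash
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# dump the live configuration, including the keyring_file_add_key and
# bdev_nvme_attach_controller (psk: key0) entries
config=$("$SPDK/scripts/rpc.py" -s "$SOCK" save_config)

# the first bdevperf has been killed at this point, so the socket path can be
# reused; feed the saved config to a fresh instance via process substitution
# (the shell exposes it as /dev/fd/NN, hence "-c /dev/fd/63" in the trace)
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$SOCK" -z -c <(echo "$config") &
```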
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:18.576 ************************************ 00:29:18.576 END TEST keyring_file 00:29:18.576 ************************************ 00:29:18.576 08:29:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:18.576 08:29:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:18.576 08:29:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:18.576 08:29:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.576 08:29:23 -- common/autotest_common.sh@10 -- # set +x 00:29:18.576 ************************************ 00:29:18.576 START TEST keyring_linux 00:29:18.576 ************************************ 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:18.576 Joined session keyring: 684900202 00:29:18.576 * Looking for test storage... 00:29:18.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.576 08:29:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.576 --rc genhtml_branch_coverage=1 00:29:18.576 --rc genhtml_function_coverage=1 00:29:18.576 --rc genhtml_legend=1 00:29:18.576 --rc geninfo_all_blocks=1 00:29:18.576 --rc geninfo_unexecuted_blocks=1 00:29:18.576 00:29:18.576 ' 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:18.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.576 --rc genhtml_branch_coverage=1 00:29:18.576 --rc genhtml_function_coverage=1 00:29:18.576 --rc genhtml_legend=1 00:29:18.576 --rc geninfo_all_blocks=1 00:29:18.576 --rc geninfo_unexecuted_blocks=1 00:29:18.576 00:29:18.576 ' 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.576 --rc genhtml_branch_coverage=1 00:29:18.576 --rc genhtml_function_coverage=1 00:29:18.576 --rc genhtml_legend=1 00:29:18.576 --rc geninfo_all_blocks=1 00:29:18.576 --rc geninfo_unexecuted_blocks=1 00:29:18.576 00:29:18.576 ' 00:29:18.576 08:29:23 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.576 --rc genhtml_branch_coverage=1 00:29:18.576 --rc genhtml_function_coverage=1 00:29:18.576 --rc genhtml_legend=1 00:29:18.576 --rc geninfo_all_blocks=1 00:29:18.576 --rc geninfo_unexecuted_blocks=1 00:29:18.576 00:29:18.576 ' 00:29:18.576 08:29:23 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:18.576 08:29:23 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.576 08:29:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.577 08:29:23 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=94e5d28e-763b-4bd1-8f77-f8a94221cfa8 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.577 08:29:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.577 08:29:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.577 08:29:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.577 08:29:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.577 08:29:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.577 08:29:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.577 08:29:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.577 08:29:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:18.577 08:29:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:18.577 /tmp/:spdk-test:key0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:18.577 08:29:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:18.577 08:29:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:18.577 08:29:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:18.837 08:29:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:18.837 /tmp/:spdk-test:key1 00:29:18.837 08:29:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:18.837 08:29:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=92663 00:29:18.837 08:29:23 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:18.837 08:29:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 92663 00:29:18.837 08:29:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 92663 ']' 00:29:18.837 08:29:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.837 08:29:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.837 08:29:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.837 08:29:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.837 08:29:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:18.837 [2024-11-17 08:29:23.747873] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
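The prep_key calls above wrote both test keys to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 in the NVMe/TCP TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (00 here, i.e. no retained-key hash), and a base64 payload. A minimal sketch of producing such a string outside the harness, under the assumption that the payload is base64 of the key bytes followed by a little-endian CRC-32 (which is what SPDK's format_key helper computes); the output path is arbitrary:

    # Sketch: build an NVMe TLS PSK interchange string for the test's key0 by hand.
    # Assumption: payload = base64(key bytes || CRC-32(key)), CRC packed little-endian.
    key="00112233445566778899aabbccddeeff"
    psk=$(python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())' "$key")
    umask 077; printf '%s' "$psk" > /tmp/:spdk-test:key0   # 0600 permissions, like the chmod above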
00:29:18.837 [2024-11-17 08:29:23.748035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92663 ] 00:29:19.096 [2024-11-17 08:29:23.933814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.096 [2024-11-17 08:29:24.038006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.356 [2024-11-17 08:29:24.261393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:19.925 08:29:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:19.925 [2024-11-17 08:29:24.860963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.925 null0 00:29:19.925 [2024-11-17 08:29:24.892897] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:19.925 [2024-11-17 08:29:24.893169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.925 08:29:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:19.925 420980956 00:29:19.925 08:29:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:19.925 971481982 00:29:19.925 08:29:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=92681 00:29:19.925 08:29:24 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:19.925 08:29:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 92681 /var/tmp/bperf.sock 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 92681 ']' 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.925 08:29:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:20.184 [2024-11-17 08:29:25.026825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:20.184 [2024-11-17 08:29:25.026951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92681 ] 00:29:20.444 [2024-11-17 08:29:25.212080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.444 [2024-11-17 08:29:25.335848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.013 08:29:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.013 08:29:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:21.013 08:29:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:21.013 08:29:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:21.272 08:29:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:21.272 08:29:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.532 [2024-11-17 08:29:26.431035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:21.791 08:29:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:21.791 08:29:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:21.791 [2024-11-17 08:29:26.778804] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:22.051 nvme0n1 00:29:22.051 08:29:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:22.051 08:29:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:22.051 08:29:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:22.051 08:29:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:22.051 08:29:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:22.051 08:29:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:22.310 08:29:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:22.310 08:29:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.310 08:29:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@25 -- # sn=420980956 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:22.310 08:29:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
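The linux keyring provider resolves the name :spdk-test:key0 through the session keyring, which is why the test first adds the interchange-format PSK with keyctl and then searches the serial number back out before comparing payloads. A minimal sketch of that round trip by hand (the key name and payload are the test's own; the serial number returned by keyctl add differs per session):

    # Sketch: the session-keyring round trip exercised above.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # returns the key serial number
    keyctl search @s user :spdk-test:key0             # resolves the name to the same serial
    keyctl print "$sn"                                 # dumps the PSK payload
    keyctl unlink "$sn"                                # removed again during cleanup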
00:29:22.570 08:29:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 420980956 == \4\2\0\9\8\0\9\5\6 ]] 00:29:22.570 08:29:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 420980956 00:29:22.570 08:29:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:22.570 08:29:27 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.570 Running I/O for 1 seconds... 00:29:23.508 10793.00 IOPS, 42.16 MiB/s 00:29:23.508 Latency(us) 00:29:23.508 [2024-11-17T08:29:28.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.508 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:23.508 nvme0n1 : 1.01 10797.54 42.18 0.00 0.00 11782.71 4684.90 16318.20 00:29:23.508 [2024-11-17T08:29:28.520Z] =================================================================================================================== 00:29:23.508 [2024-11-17T08:29:28.520Z] Total : 10797.54 42.18 0.00 0.00 11782.71 4684.90 16318.20 00:29:23.508 { 00:29:23.508 "results": [ 00:29:23.508 { 00:29:23.508 "job": "nvme0n1", 00:29:23.508 "core_mask": "0x2", 00:29:23.508 "workload": "randread", 00:29:23.508 "status": "finished", 00:29:23.508 "queue_depth": 128, 00:29:23.508 "io_size": 4096, 00:29:23.508 "runtime": 1.011434, 00:29:23.508 "iops": 10797.54091715327, 00:29:23.508 "mibps": 42.17789420762996, 00:29:23.508 "io_failed": 0, 00:29:23.508 "io_timeout": 0, 00:29:23.508 "avg_latency_us": 11782.712357055729, 00:29:23.508 "min_latency_us": 4684.90281124498, 00:29:23.508 "max_latency_us": 16318.20080321285 00:29:23.508 } 00:29:23.508 ], 00:29:23.508 "core_count": 1 00:29:23.508 } 00:29:23.508 08:29:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:23.508 08:29:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:23.767 08:29:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:23.767 08:29:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:23.767 08:29:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:23.767 08:29:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:23.767 08:29:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:23.767 08:29:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:24.027 08:29:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:24.027 08:29:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:24.027 08:29:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:24.027 08:29:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:24.027 08:29:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:24.027 08:29:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:24.027 
08:29:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:24.027 08:29:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.027 08:29:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:24.027 08:29:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:24.027 08:29:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:24.027 08:29:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:24.287 [2024-11-17 08:29:29.075565] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:24.287 [2024-11-17 08:29:29.076022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:24.287 [2024-11-17 08:29:29.076977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:24.287 [2024-11-17 08:29:29.077965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:24.287 [2024-11-17 08:29:29.078140] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:24.287 [2024-11-17 08:29:29.078316] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:24.287 [2024-11-17 08:29:29.078378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
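The errors above are the expected negative path: bdevperf is asked to attach again with --psk :spdk-test:key1, but the target side was only set up with key0's PSK, so the TLS handshake fails and the attach is rejected. Both the successful and the failing attach go through the same bdev_nvme_attach_controller RPC; a minimal sketch of the pair (socket path, address and NQNs are the test's own):

    # Sketch: the matching and mismatching PSK attach calls driven above through bperf_cmd.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Matching PSK (key0): succeeds and exposes nvme0n1 to bdevperf.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    # Mismatching PSK (key1, never configured on the target): the handshake fails and the RPC
    # returns the Input/output error shown in the request/response dump below.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1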
00:29:24.287 request: 00:29:24.287 { 00:29:24.287 "name": "nvme0", 00:29:24.287 "trtype": "tcp", 00:29:24.287 "traddr": "127.0.0.1", 00:29:24.287 "adrfam": "ipv4", 00:29:24.287 "trsvcid": "4420", 00:29:24.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.287 "prchk_reftag": false, 00:29:24.287 "prchk_guard": false, 00:29:24.287 "hdgst": false, 00:29:24.287 "ddgst": false, 00:29:24.287 "psk": ":spdk-test:key1", 00:29:24.287 "allow_unrecognized_csi": false, 00:29:24.287 "method": "bdev_nvme_attach_controller", 00:29:24.287 "req_id": 1 00:29:24.287 } 00:29:24.287 Got JSON-RPC error response 00:29:24.287 response: 00:29:24.287 { 00:29:24.287 "code": -5, 00:29:24.287 "message": "Input/output error" 00:29:24.287 } 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@33 -- # sn=420980956 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 420980956 00:29:24.287 1 links removed 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@33 -- # sn=971481982 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 971481982 00:29:24.287 1 links removed 00:29:24.287 08:29:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 92681 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 92681 ']' 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 92681 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92681 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.287 killing process with pid 92681 00:29:24.287 Received shutdown signal, test time was about 1.000000 seconds 00:29:24.287 00:29:24.287 Latency(us) 00:29:24.287 [2024-11-17T08:29:29.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.287 [2024-11-17T08:29:29.299Z] =================================================================================================================== 00:29:24.287 
[2024-11-17T08:29:29.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92681' 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 92681 00:29:24.287 08:29:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 92681 00:29:25.666 08:29:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 92663 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 92663 ']' 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 92663 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92663 00:29:25.666 killing process with pid 92663 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92663' 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@973 -- # kill 92663 00:29:25.666 08:29:30 keyring_linux -- common/autotest_common.sh@978 -- # wait 92663 00:29:27.574 00:29:27.574 real 0m9.292s 00:29:27.574 user 0m14.861s 00:29:27.574 sys 0m2.073s 00:29:27.574 08:29:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.574 ************************************ 00:29:27.574 08:29:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:27.574 END TEST keyring_linux 00:29:27.574 ************************************ 00:29:27.834 08:29:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:27.834 08:29:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:27.834 08:29:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:27.834 08:29:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:27.834 08:29:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:27.834 08:29:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:27.834 08:29:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:27.834 08:29:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.834 08:29:32 -- common/autotest_common.sh@10 -- # set +x 00:29:27.834 08:29:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:27.834 08:29:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:27.834 08:29:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:27.834 08:29:32 -- common/autotest_common.sh@10 -- # set +x 00:29:30.448 INFO: APP EXITING 00:29:30.448 INFO: 
killing all VMs 00:29:30.448 INFO: killing vhost app 00:29:30.449 INFO: EXIT DONE 00:29:31.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:31.388 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:31.388 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:32.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:32.328 Cleaning 00:29:32.328 Removing: /var/run/dpdk/spdk0/config 00:29:32.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:32.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:32.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:32.328 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:32.328 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:32.328 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:32.328 Removing: /var/run/dpdk/spdk1/config 00:29:32.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:32.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:32.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:32.328 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:32.328 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:32.328 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:32.328 Removing: /var/run/dpdk/spdk2/config 00:29:32.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:32.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:32.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:32.328 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:32.328 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:32.328 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:32.328 Removing: /var/run/dpdk/spdk3/config 00:29:32.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:32.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:32.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:32.328 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:32.328 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:32.328 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:32.328 Removing: /var/run/dpdk/spdk4/config 00:29:32.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:32.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:32.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:32.328 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:32.328 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:32.587 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:32.587 Removing: /dev/shm/nvmf_trace.0 00:29:32.587 Removing: /dev/shm/spdk_tgt_trace.pid57434 00:29:32.587 Removing: /var/run/dpdk/spdk0 00:29:32.587 Removing: /var/run/dpdk/spdk1 00:29:32.587 Removing: /var/run/dpdk/spdk2 00:29:32.587 Removing: /var/run/dpdk/spdk3 00:29:32.587 Removing: /var/run/dpdk/spdk4 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57183 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57434 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57669 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57773 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57833 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57968 00:29:32.587 Removing: /var/run/dpdk/spdk_pid57996 00:29:32.587 Removing: /var/run/dpdk/spdk_pid58162 00:29:32.587 Removing: /var/run/dpdk/spdk_pid58371 00:29:32.587 Removing: /var/run/dpdk/spdk_pid58545 00:29:32.587 Removing: 
/var/run/dpdk/spdk_pid58656 00:29:32.587 Removing: /var/run/dpdk/spdk_pid58769 00:29:32.587 Removing: /var/run/dpdk/spdk_pid58896 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59010 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59049 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59086 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59162 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59268 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59725 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59806 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59889 00:29:32.587 Removing: /var/run/dpdk/spdk_pid59905 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60064 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60080 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60239 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60261 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60336 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60359 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60429 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60447 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60653 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60695 00:29:32.587 Removing: /var/run/dpdk/spdk_pid60779 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61147 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61165 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61208 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61239 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61272 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61309 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61340 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61373 00:29:32.587 Removing: /var/run/dpdk/spdk_pid61404 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61435 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61468 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61499 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61530 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61563 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61600 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61631 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61658 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61695 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61726 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61759 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61807 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61838 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61875 00:29:32.846 Removing: /var/run/dpdk/spdk_pid61969 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62015 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62038 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62083 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62112 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62137 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62197 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62228 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62274 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62295 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62317 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62344 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62371 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62398 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62425 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62452 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62498 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62542 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62564 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62610 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62637 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62662 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62720 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62749 
00:29:32.846 Removing: /var/run/dpdk/spdk_pid62793 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62817 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62838 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62863 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62883 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62907 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62932 00:29:32.846 Removing: /var/run/dpdk/spdk_pid62956 00:29:32.846 Removing: /var/run/dpdk/spdk_pid63057 00:29:32.846 Removing: /var/run/dpdk/spdk_pid63150 00:29:32.846 Removing: /var/run/dpdk/spdk_pid63318 00:29:32.846 Removing: /var/run/dpdk/spdk_pid63371 00:29:32.846 Removing: /var/run/dpdk/spdk_pid63424 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63457 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63491 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63523 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63572 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63605 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63701 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63751 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63829 00:29:33.106 Removing: /var/run/dpdk/spdk_pid63962 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64047 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64110 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64245 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64309 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64359 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64615 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64744 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64790 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64817 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64872 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64923 00:29:33.106 Removing: /var/run/dpdk/spdk_pid64976 00:29:33.106 Removing: /var/run/dpdk/spdk_pid65023 00:29:33.106 Removing: /var/run/dpdk/spdk_pid65445 00:29:33.106 Removing: /var/run/dpdk/spdk_pid65488 00:29:33.106 Removing: /var/run/dpdk/spdk_pid65865 00:29:33.106 Removing: /var/run/dpdk/spdk_pid66348 00:29:33.106 Removing: /var/run/dpdk/spdk_pid66614 00:29:33.106 Removing: /var/run/dpdk/spdk_pid67548 00:29:33.106 Removing: /var/run/dpdk/spdk_pid68510 00:29:33.106 Removing: /var/run/dpdk/spdk_pid68639 00:29:33.106 Removing: /var/run/dpdk/spdk_pid68719 00:29:33.106 Removing: /var/run/dpdk/spdk_pid70196 00:29:33.106 Removing: /var/run/dpdk/spdk_pid70582 00:29:33.106 Removing: /var/run/dpdk/spdk_pid73972 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74371 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74483 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74624 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74669 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74705 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74746 00:29:33.106 Removing: /var/run/dpdk/spdk_pid74864 00:29:33.106 Removing: /var/run/dpdk/spdk_pid75006 00:29:33.106 Removing: /var/run/dpdk/spdk_pid75190 00:29:33.106 Removing: /var/run/dpdk/spdk_pid75290 00:29:33.106 Removing: /var/run/dpdk/spdk_pid75503 00:29:33.106 Removing: /var/run/dpdk/spdk_pid75609 00:29:33.106 Removing: /var/run/dpdk/spdk_pid75716 00:29:33.106 Removing: /var/run/dpdk/spdk_pid76100 00:29:33.106 Removing: /var/run/dpdk/spdk_pid76548 00:29:33.106 Removing: /var/run/dpdk/spdk_pid76549 00:29:33.106 Removing: /var/run/dpdk/spdk_pid76550 00:29:33.364 Removing: /var/run/dpdk/spdk_pid76835 00:29:33.364 Removing: /var/run/dpdk/spdk_pid77126 00:29:33.364 Removing: /var/run/dpdk/spdk_pid77135 00:29:33.364 Removing: /var/run/dpdk/spdk_pid79554 00:29:33.364 Removing: /var/run/dpdk/spdk_pid79994 00:29:33.364 Removing: 
/var/run/dpdk/spdk_pid79998 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80338 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80359 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80379 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80413 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80419 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80513 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80516 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80625 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80638 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80743 00:29:33.364 Removing: /var/run/dpdk/spdk_pid80751 00:29:33.364 Removing: /var/run/dpdk/spdk_pid81205 00:29:33.364 Removing: /var/run/dpdk/spdk_pid81252 00:29:33.364 Removing: /var/run/dpdk/spdk_pid81348 00:29:33.364 Removing: /var/run/dpdk/spdk_pid81415 00:29:33.364 Removing: /var/run/dpdk/spdk_pid81798 00:29:33.364 Removing: /var/run/dpdk/spdk_pid82001 00:29:33.364 Removing: /var/run/dpdk/spdk_pid82451 00:29:33.364 Removing: /var/run/dpdk/spdk_pid83029 00:29:33.364 Removing: /var/run/dpdk/spdk_pid83874 00:29:33.364 Removing: /var/run/dpdk/spdk_pid84542 00:29:33.364 Removing: /var/run/dpdk/spdk_pid84545 00:29:33.365 Removing: /var/run/dpdk/spdk_pid86594 00:29:33.365 Removing: /var/run/dpdk/spdk_pid86661 00:29:33.365 Removing: /var/run/dpdk/spdk_pid86730 00:29:33.365 Removing: /var/run/dpdk/spdk_pid86801 00:29:33.365 Removing: /var/run/dpdk/spdk_pid86941 00:29:33.365 Removing: /var/run/dpdk/spdk_pid87002 00:29:33.365 Removing: /var/run/dpdk/spdk_pid87069 00:29:33.365 Removing: /var/run/dpdk/spdk_pid87130 00:29:33.365 Removing: /var/run/dpdk/spdk_pid87521 00:29:33.365 Removing: /var/run/dpdk/spdk_pid88745 00:29:33.365 Removing: /var/run/dpdk/spdk_pid88898 00:29:33.365 Removing: /var/run/dpdk/spdk_pid89148 00:29:33.365 Removing: /var/run/dpdk/spdk_pid89779 00:29:33.365 Removing: /var/run/dpdk/spdk_pid89943 00:29:33.365 Removing: /var/run/dpdk/spdk_pid90115 00:29:33.365 Removing: /var/run/dpdk/spdk_pid90216 00:29:33.365 Removing: /var/run/dpdk/spdk_pid90390 00:29:33.365 Removing: /var/run/dpdk/spdk_pid90509 00:29:33.365 Removing: /var/run/dpdk/spdk_pid91250 00:29:33.365 Removing: /var/run/dpdk/spdk_pid91292 00:29:33.365 Removing: /var/run/dpdk/spdk_pid91328 00:29:33.365 Removing: /var/run/dpdk/spdk_pid91694 00:29:33.365 Removing: /var/run/dpdk/spdk_pid91736 00:29:33.365 Removing: /var/run/dpdk/spdk_pid91767 00:29:33.624 Removing: /var/run/dpdk/spdk_pid92242 00:29:33.624 Removing: /var/run/dpdk/spdk_pid92265 00:29:33.624 Removing: /var/run/dpdk/spdk_pid92506 00:29:33.624 Removing: /var/run/dpdk/spdk_pid92663 00:29:33.624 Removing: /var/run/dpdk/spdk_pid92681 00:29:33.624 Clean 00:29:33.624 08:29:38 -- common/autotest_common.sh@1453 -- # return 0 00:29:33.624 08:29:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:29:33.624 08:29:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.624 08:29:38 -- common/autotest_common.sh@10 -- # set +x 00:29:33.624 08:29:38 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:33.624 08:29:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.624 08:29:38 -- common/autotest_common.sh@10 -- # set +x 00:29:33.624 08:29:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:33.624 08:29:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:33.624 08:29:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:33.624 08:29:38 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:33.624 
08:29:38 -- spdk/autotest.sh@398 -- # hostname 00:29:33.624 08:29:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:33.883 geninfo: WARNING: invalid characters removed from testname! 00:30:00.441 08:30:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:03.733 08:30:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:05.637 08:30:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:08.176 08:30:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:10.085 08:30:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:12.624 08:30:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:14.533 08:30:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:14.533 08:30:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:14.533 08:30:19 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:14.533 08:30:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:14.533 08:30:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:14.533 08:30:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:14.533 + [[ -n 5223 ]] 00:30:14.533 + sudo kill 5223 00:30:14.804 [Pipeline] } 00:30:14.820 [Pipeline] // timeout 00:30:14.827 [Pipeline] } 00:30:14.842 [Pipeline] // stage 00:30:14.848 [Pipeline] } 00:30:14.864 [Pipeline] // catchError 00:30:14.874 [Pipeline] stage 00:30:14.877 [Pipeline] { (Stop VM) 00:30:14.890 [Pipeline] sh 00:30:15.176 + vagrant halt 00:30:18.468 ==> default: Halting domain... 00:30:25.060 [Pipeline] sh 00:30:25.343 + vagrant destroy -f 00:30:27.952 ==> default: Removing domain... 00:30:27.974 [Pipeline] sh 00:30:28.258 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:28.268 [Pipeline] } 00:30:28.282 [Pipeline] // stage 00:30:28.288 [Pipeline] } 00:30:28.302 [Pipeline] // dir 00:30:28.308 [Pipeline] } 00:30:28.323 [Pipeline] // wrap 00:30:28.329 [Pipeline] } 00:30:28.342 [Pipeline] // catchError 00:30:28.351 [Pipeline] stage 00:30:28.353 [Pipeline] { (Epilogue) 00:30:28.367 [Pipeline] sh 00:30:28.650 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:33.938 [Pipeline] catchError 00:30:33.940 [Pipeline] { 00:30:33.955 [Pipeline] sh 00:30:34.239 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:34.499 Artifacts sizes are good 00:30:34.508 [Pipeline] } 00:30:34.521 [Pipeline] // catchError 00:30:34.531 [Pipeline] archiveArtifacts 00:30:34.538 Archiving artifacts 00:30:34.657 [Pipeline] cleanWs 00:30:34.670 [WS-CLEANUP] Deleting project workspace... 00:30:34.670 [WS-CLEANUP] Deferred wipeout is used... 00:30:34.676 [WS-CLEANUP] done 00:30:34.678 [Pipeline] } 00:30:34.693 [Pipeline] // stage 00:30:34.699 [Pipeline] } 00:30:34.713 [Pipeline] // node 00:30:34.718 [Pipeline] End of Pipeline 00:30:34.761 Finished: SUCCESS